Image encoding method, image decoding method, image encoding apparatus, image decoding apparatus, and image encoding and decoding apparatus
Patent Abstract:
IMAGE ENCODING METHOD, IMAGE DECODING METHOD, IMAGE ENCODING APPARATUS, IMAGE DECODING APPARATUS, AND IMAGE ENCODING AND DECODING APPARATUS. The present invention relates to an image encoding method which includes: adding, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used to encode the current motion vector (S701); selecting the predicted motion vector from the candidate list (S702); and encoding the current motion vector (S), wherein, in the adding (S701), the first adjacent motion vector, indicating a position in a first reference image included in a first reference image list, is added to the candidate list for the current motion vector, which indicates a position in a second reference image included in a second reference image list.
Publication No.: BR112013002448B1
Application No.: R112013002448-8
Filing date: 2011-12-27
Publication date: 2021-08-31
Inventors: Toshiyasu Sugio; Takahiro Nishi; Youji Shibahara; Hisao Sasai
Applicant: Sun Patent Trust
Primary IPC:
Patent Description:
Technical Field
The present invention relates to an image encoding method of encoding an image with prediction, and an image decoding method of decoding an image with prediction.
Background Art
An image encoding apparatus generally compresses the amount of information by using redundancy of images (including still images and moving images) in the spatial and temporal directions. Transformation into the frequency domain is used as the compression method exploiting redundancy in the spatial direction. Inter prediction is used as the compression method exploiting redundancy in the temporal direction. Inter prediction is also called inter-image prediction. When encoding a certain image, the image encoding apparatus employing inter prediction uses, as a reference image, an encoded image located before or after the current image to be encoded in display order. Subsequently, the image encoding apparatus estimates a motion vector of the current image with respect to the reference image. Next, the image encoding apparatus obtains predicted image data resulting from motion compensation based on the motion vector. Then, the image encoding apparatus obtains a difference between the image data of the current image and the predicted image data. Then, the image encoding apparatus encodes the difference obtained. In this way, the image encoding apparatus removes the redundancy in the temporal direction. The image encoding apparatus according to the moving image encoding scheme called H.264 (see Non-Patent Literature 1), which has already been standardized, uses three types of images, i.e., the I picture, the P picture, and the B picture, to compress the amount of information. The image encoding apparatus does not perform inter prediction on the I picture. In other words, the image encoding apparatus performs only intra-picture prediction on the I picture. Intra-picture prediction is also called intra prediction.
The image encoding apparatus performs inter prediction on the P picture with reference to one encoded image located before or after the current image in display order. Furthermore, the image encoding apparatus performs inter prediction on the B picture with reference to two encoded images located before or after the current image in display order. In inter prediction, the image encoding apparatus generates a reference list (also called a reference image list) to identify a reference image. In a reference list, reference image indices are allocated to the encoded reference images to be referenced in inter prediction. For example, the image encoding apparatus holds two reference lists (L0, L1) to refer to two images for the B picture. Fig. 33 illustrates an example of such reference lists. The first reference image list (L0) in Fig. 33 is an example of a reference image list corresponding to the first prediction direction of bidirectional prediction. In the first reference image list of Fig. 33, a reference image index indicated by 0 is allocated to a reference image R1 in display order 2. In addition, a reference image index indicated by 1 is allocated to a reference image R2 in display order 1. In addition, a reference image index indicated by 2 is allocated to a reference image R3 in display order 0. In other words, in the first reference image list of Fig. 33, a lower reference image index is allocated to a reference image the closer that reference image is to the current image in display order. On the other hand, the second reference image list (L1) in Fig. 33 is an example of a reference image list corresponding to the second prediction direction of bidirectional prediction. In the second reference image list of Fig. 33, a reference image index indicated by 0 is allocated to the reference image R2 in display order 1. In addition, a reference image index indicated by 1 is allocated to the reference image R1 in display order 2.
In addition, a reference image index indicated by 2 is allocated to the reference image R3 in display order 0. As such, there are cases where two different reference image indices are allocated to a particular reference image included in the two reference image lists (the reference image R1 or R2 in Fig. 33). Furthermore, there are cases where the same reference image index is allocated to a particular reference image included in the two reference image lists (the reference image R3 in Fig. 33). Prediction using only the first reference image list (L0) is called L0 prediction. Prediction using only the second reference image list (L1) is called L1 prediction. Prediction using both the first reference image list and the second reference image list is called bidirectional prediction or bi-prediction. In L0 prediction, a forward direction is often used as the prediction direction. In L1 prediction, a backward direction is often used as the prediction direction. In other words, the first reference image list corresponds to the first prediction direction, and the second reference image list corresponds to the second prediction direction. Based on these relationships, the prediction direction is categorized as the first prediction direction, the second prediction direction, or bidirectional. When the prediction direction is bidirectional, it is also referred to as bidirectional prediction or bi-prediction. The H.264 image coding scheme has a motion vector estimation mode as an encoding mode for a block to be encoded in the B picture. In the motion vector estimation mode, the image encoding apparatus estimates a motion vector for the block to be encoded with reference to a reference image. The image encoding apparatus then generates predicted image data using the reference image and the motion vector.
Then, the image encoding apparatus encodes (i) a difference between the predicted image data and the image data of the block to be encoded and (ii) the motion vector used to generate the predicted image data. The motion vector estimation mode can use bidirectional prediction to generate a predicted image with reference to two encoded images located before or after the current image. Alternatively, the motion vector estimation mode can use unidirectional prediction to generate a predicted image with reference to one encoded image located before or after the current image. One of bidirectional prediction and unidirectional prediction is selected for each block to be encoded. When encoding a motion vector in the motion vector estimation mode, the image encoding apparatus generates a predicted motion vector from a motion vector of a block such as an encoded block adjacent to the current block. The image encoding apparatus encodes a difference between the motion vector and the predicted motion vector. In this way, the image encoding apparatus reduces the amount of information. A specific example will be described with reference to Fig. 34. Fig. 34 illustrates a current block to be encoded, an adjacent block A, an adjacent block B, and an adjacent block C. The adjacent block A is an encoded block adjacent to the left of the current block. The adjacent block B is an encoded block adjacent above the current block. The adjacent block C is an encoded block adjacent to the upper right of the current block. In Fig. 34, the adjacent block A has been encoded with bidirectional prediction, and has a motion vector MvL0_A in the first prediction direction and a motion vector MvL1_A in the second prediction direction. Here, a motion vector in the first prediction direction is a motion vector indicating a position in a reference image identified by the first reference image list.
A motion vector in the second prediction direction is a motion vector indicating a position in a reference image identified by the second reference image list. Furthermore, the adjacent block B has been encoded with unidirectional prediction, and has a motion vector MvL0_B in the first prediction direction. Furthermore, the adjacent block C has been encoded with bidirectional prediction, and has a motion vector MvL0_C in the first prediction direction and a motion vector MvL1_C in the second prediction direction. Furthermore, the current block is a block to be encoded with bidirectional prediction, and has a motion vector MvL0 in the first prediction direction and a motion vector MvL1 in the second prediction direction. When encoding the motion vector MvL0 in the first prediction direction of the current block, the image encoding apparatus generates a predicted motion vector PMvL0 corresponding to the first prediction direction, using adjacent blocks having motion vectors in the first prediction direction. More specifically, the image encoding apparatus generates the predicted motion vector PMvL0 using the motion vector MvL0_A of the adjacent block A, the motion vector MvL0_B of the adjacent block B, and the motion vector MvL0_C of the adjacent block C. In other words, when encoding the motion vector MvL0 in the first prediction direction of the current block, the image encoding apparatus uses motion vectors in the first prediction direction of the blocks adjacent to the current block. Then, the image encoding apparatus encodes a difference between the motion vector MvL0 and the predicted motion vector PMvL0. The predicted motion vector PMvL0 is calculated using Median(MvL0_A, MvL0_B, MvL0_C), which is an equation for calculating a median value (central value) of the motion vectors MvL0_A, MvL0_B, and MvL0_C. Median is represented by equations 1 to 3 below.
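Equations 1 to 3 are not reproduced in this text. As an illustrative reconstruction rather than a verbatim copy of those equations, the component-wise median of three motion vectors, as commonly computed for this predictor in H.264, can be sketched as follows (function names are ours):

```python
def median(a, b, c):
    # Median of three scalars: sum minus minimum minus maximum.
    return a + b + c - min(a, b, c) - max(a, b, c)

def median_mv(mv_a, mv_b, mv_c):
    # Component-wise median of three motion vectors given as (x, y) pairs.
    return (median(mv_a[0], mv_b[0], mv_c[0]),
            median(mv_a[1], mv_b[1], mv_c[1]))

# Example: PMvL0 from MvL0_A = (4, 2), MvL0_B = (6, -1), MvL0_C = (5, 3).
pmv_l0 = median_mv((4, 2), (6, -1), (5, 3))  # -> (5, 2)
```

The predictor is taken per component, so the x and y medians may come from different adjacent blocks.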
When encoding the motion vector MvL1 in the second prediction direction of the current block, the image encoding apparatus generates a predicted motion vector PMvL1 corresponding to the second prediction direction, using adjacent blocks having motion vectors in the second prediction direction. More specifically, the image encoding apparatus generates the predicted motion vector PMvL1 using the motion vector MvL1_A of the adjacent block A and the motion vector MvL1_C of the adjacent block C. In other words, when encoding the motion vector MvL1 in the second prediction direction of the current block, the image encoding apparatus uses motion vectors in the second prediction direction of the blocks adjacent to the current block. Then, the image encoding apparatus encodes a differential motion vector which is a difference between the motion vector MvL1 and the predicted motion vector PMvL1. The predicted motion vector PMvL1 is calculated using Median(MvL1_A, 0, MvL1_C) and the like.
Reference List
Non-Patent Literature
NPL 1: ITU-T H.264, 03/2010
Summary of Invention
Technical Problem
When the number of motion vectors in the same prediction direction is small, the number of motion vectors available to calculate a predicted motion vector is small. In such a case, the coding efficiency of motion vectors is not improved. In the conventional method of calculating a predicted motion vector, the image encoding apparatus uses only motion vectors in the first prediction direction of the adjacent blocks when calculating the predicted motion vector PMvL0 in the first prediction direction of the current block, as described above. Here, the image encoding apparatus does not use the motion vectors in the second prediction direction of the adjacent blocks. Furthermore, the image encoding apparatus uses only the motion vectors in the second prediction direction of the adjacent blocks when calculating the predicted motion vector PMvL1 in the second prediction direction of the current block.
Here, the image encoding apparatus does not use the motion vectors in the first prediction direction of the adjacent blocks. In other words, in the conventional method, the motion vectors of the adjacent blocks to be used to calculate a predicted motion vector are limited. Thus, an optimal predicted motion vector is not derived, and the coding efficiency is not improved. Therefore, the present invention has an object to provide an image encoding method and an image decoding method for deriving a suitable predicted motion vector to improve the coding efficiency of a motion vector.
Solution to Problem
In order to solve the problems, an image encoding method according to an aspect of the present invention is a method of encoding a current image per block with prediction using one or both of a first reference image list and a second reference image list, and includes: adding, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used to encode a current motion vector, the first adjacent motion vector being a motion vector of a block adjacent to a current block included in the current image, and the current motion vector being a motion vector of the current block; selecting the predicted motion vector to be used to encode the current motion vector from the candidate list including the first adjacent motion vector; and encoding the current motion vector using the selected predicted motion vector, wherein, in the adding, the first adjacent motion vector is added to the candidate list for the current motion vector, the first adjacent motion vector indicating a position in a first reference image included in the first reference image list, and the current motion vector indicating a position in a second reference image included in the second reference image list.
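The add/select/encode steps of the method can be sketched as below. The candidate list is assumed to be already built by the adding step; the selection criterion (minimizing the magnitude of the difference) and all names are illustrative assumptions, since the selection strategy is left open here.

```python
def encode_current_motion_vector(current_mv, candidate_list):
    # Selecting step: pick the predictor that minimizes the differential
    # magnitude (sum of absolute component differences).
    idx, pmv = min(
        enumerate(candidate_list),
        key=lambda e: abs(current_mv[0] - e[1][0]) + abs(current_mv[1] - e[1][1]))
    # Encoding step: only the candidate index and the (small) difference
    # need to be transmitted instead of the full motion vector.
    diff = (current_mv[0] - pmv[0], current_mv[1] - pmv[1])
    return idx, diff

# Current motion vector (5, 3) with a candidate list from the adding step.
idx, diff = encode_current_motion_vector((5, 3), [(0, 0), (4, 2), (9, 9)])  # -> 1, (1, 1)
```

The larger the candidate list, the better the chance that some candidate lies close to the current motion vector, which is why adding cross-list candidates can improve coding efficiency.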
In this way, the adjacent motion vector corresponding to the first reference image list is added to the candidate list corresponding to the second reference image list, so the number of options for the predicted motion vector increases. Thus, it is possible to derive a suitable predicted motion vector to improve the coding efficiency of the current motion vector. Furthermore, in the adding, a second adjacent motion vector may be additionally added to the candidate list, the second adjacent motion vector being a motion vector of the adjacent block and indicating a position in a third reference image included in the second reference image list. In this way, the adjacent motion vector corresponding to the second reference image list is also added to the candidate list corresponding to the second reference image list, further increasing the number of options for the predicted motion vector. Thus, it is possible to derive a suitable predicted motion vector to improve the coding efficiency of the current motion vector. Furthermore, in the adding: it may be determined whether or not the second reference image is identical to the third reference image; the second adjacent motion vector may be added to the candidate list when it is determined that the second reference image is identical to the third reference image; it may be determined whether or not the second reference image is identical to the first reference image; and the first adjacent motion vector may be added to the candidate list when it is determined that the second reference image is identical to the first reference image. In this way, an adjacent motion vector is added to the candidate list only when the reference image corresponding to the current motion vector is identical to the reference image corresponding to that adjacent motion vector. Thus, an adjacent motion vector is added to the candidate list only when it is suitable as a candidate for the predicted motion vector.
Thus, an appropriate predicted motion vector is derived. Furthermore, in the adding: it may be determined whether or not the second reference image is identical to the first reference image when it is determined that the second reference image is not identical to the third reference image; and the first adjacent motion vector may be added to the candidate list when it is determined that the second reference image is not identical to the third reference image and that the second reference image is identical to the first reference image. In this way, when the current motion vector corresponds to the second reference image list, the adjacent motion vector corresponding to the second reference image list is preferentially added to the candidate list. Thus, a more suitable adjacent motion vector is added to the candidate list as a candidate for the predicted motion vector. Furthermore, in the adding: it may be determined whether or not the second reference image is identical to the third reference image by determining whether or not a display order of the second reference image, identified by the second reference image list and a second reference index, is identical to a display order of the third reference image, identified by the second reference image list and a third reference index; and it may be determined whether or not the second reference image is identical to the first reference image by determining whether or not the display order of the second reference image, identified by the second reference image list and the second reference index, is identical to a display order of the first reference image, identified by the first reference image list and a first reference index. In this way, whether or not the reference image identified by the first reference image list is identical to the reference image identified by the second reference image list is appropriately determined based on the display orders.
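The preference order described above, for a current motion vector pointing into the second reference image list (L1), can be sketched as follows. Reference images are represented here by their display orders, candidates as (x, y) pairs, and all names are illustrative; a zero motion vector is added as a fallback so the list is never empty.

```python
def add_candidates(cur_ref, adj_mv_l1, adj_ref_l1, adj_mv_l0, adj_ref_l0):
    candidates = []
    # Prefer the adjacent block's motion vector from the same list (L1)
    # when it references the same picture as the current motion vector.
    if adj_mv_l1 is not None and adj_ref_l1 == cur_ref:
        candidates.append(adj_mv_l1)
    # Otherwise fall back to the cross-list (L0) motion vector when it
    # references the current motion vector's reference picture.
    elif adj_mv_l0 is not None and adj_ref_l0 == cur_ref:
        candidates.append(adj_mv_l0)
    # When neither references the current picture, add a zero motion
    # vector so that the candidate list is never empty.
    else:
        candidates.append((0, 0))
    return candidates
```

For example, if the adjacent block's L1 vector points to a different picture than the current motion vector but its L0 vector points to the same one, the L0 vector is added, which is the cross-list case at the core of the method.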
Furthermore, in the adding, a motion vector having a magnitude of 0 may be added as a candidate for the predicted motion vector when it is determined that the second reference image is not identical to the third reference image and that the second reference image is not identical to the first reference image. In this way, a decrease in the number of candidates is suppressed, and a state where there is no candidate in the candidate list is avoided. Furthermore, in the adding, a plurality of index values and a plurality of candidates for the predicted motion vector may be added to the candidate list so that the index values are in one-to-one correspondence with the candidates; in the selecting, an index value may be selected from the candidate list as the predicted motion vector; and in the encoding, the selected index value may be encoded such that the code of an index value becomes longer as the index value becomes larger. In this way, the selected predicted motion vector is properly encoded, so the encoder and the decoder select the same predicted motion vector. Furthermore, in the adding, the first adjacent motion vector of the adjacent block may be added to the candidate list, the adjacent block being one of a block adjacent to the left of, a block adjacent above, and a block adjacent to the upper right of the current block. In this way, a plurality of adjacent motion vectors are added to the candidate list as candidates for the predicted motion vector, increasing the number of options for the predicted motion vector. Furthermore, an image decoding method according to an aspect of the present invention may be a method of decoding a current image per block with prediction using one or both of a first reference image list and a second reference image list
, and includes: adding, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used to decode a current motion vector, the first adjacent motion vector being a motion vector of a block adjacent to a current block included in the current image, and the current motion vector being a motion vector of the current block; selecting the predicted motion vector to be used to decode the current motion vector from the candidate list including the first adjacent motion vector; and decoding the current motion vector using the selected predicted motion vector, wherein, in the adding, the first adjacent motion vector may be added to the candidate list for the current motion vector, the first adjacent motion vector indicating a position in a first reference image included in the first reference image list, and the current motion vector indicating a position in a second reference image included in the second reference image list. In this way, the adjacent motion vector corresponding to the first reference image list is added to the candidate list corresponding to the second reference image list, so the number of options for the predicted motion vector increases. Thus, it is possible to derive a suitable predicted motion vector to improve the coding efficiency of the current motion vector. Furthermore, in the adding, a second adjacent motion vector may be additionally added to the candidate list, the second adjacent motion vector being a motion vector of the adjacent block and indicating a position in a third reference image included in the second reference image list. In this way, the adjacent motion vector corresponding to the second reference image list is also added to the candidate list corresponding to the second reference image list, further increasing the number of options for the predicted motion vector. Thus, it is possible to derive a suitable predicted motion vector to improve the coding efficiency of the current motion vector.
Furthermore, in the adding: it may be determined whether or not the second reference image is identical to the third reference image; the second adjacent motion vector may be added to the candidate list when it is determined that the second reference image is identical to the third reference image; it may be determined whether or not the second reference image is identical to the first reference image; and the first adjacent motion vector may be added to the candidate list when it is determined that the second reference image is identical to the first reference image. In this way, an adjacent motion vector is added to the candidate list only when the reference image corresponding to the current motion vector is identical to the reference image corresponding to that adjacent motion vector. Thus, an adjacent motion vector is added to the candidate list only when it is suitable as a candidate for the predicted motion vector. Thus, an appropriate predicted motion vector is derived. Furthermore, in the adding: it may be determined whether or not the second reference image is identical to the first reference image when it is determined that the second reference image is not identical to the third reference image; and the first adjacent motion vector may be added to the candidate list when it is determined that the second reference image is not identical to the third reference image and that the second reference image is identical to the first reference image. In this way, when the current motion vector corresponds to the second reference image list, the adjacent motion vector corresponding to the second reference image list is preferentially added to the candidate list. Thus, a more suitable adjacent motion vector is added to the candidate list as a candidate for the predicted motion vector.
Furthermore, in the adding: it may be determined whether or not the second reference image is identical to the third reference image by determining whether or not a display order of the second reference image, identified by the second reference image list and a second reference index, is identical to a display order of the third reference image, identified by the second reference image list and a third reference index; and it may be determined whether or not the second reference image is identical to the first reference image by determining whether or not the display order of the second reference image, identified by the second reference image list and the second reference index, is identical to a display order of the first reference image, identified by the first reference image list and a first reference index. In this way, whether or not the reference image identified by the first reference image list is identical to the reference image identified by the second reference image list is appropriately determined based on the display orders. Furthermore, in the adding, a motion vector having a magnitude of 0 may be added as a candidate for the predicted motion vector when it is determined that the second reference image is not identical to the third reference image and that the second reference image is not identical to the first reference image. In this way, a decrease in the number of candidates is suppressed, and a state where there is no candidate in the candidate list is avoided.
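The candidate list pairs each candidate with an index value whose code becomes longer as the index grows. A simple unary code has this property; it is an illustrative assumption here, not the actual code table of the specification (which is shown in Fig. 7).

```python
def encode_pmv_index(idx):
    # Unary-style code: idx ones followed by a terminating zero, so the
    # codeword length grows with the index value.
    return "1" * idx + "0"

def decode_pmv_index(bits):
    # Count leading ones up to the terminating zero to recover the index.
    return bits.index("0")

# Round-trip check for the first few indices: 0 -> "0", 1 -> "10", ...
for i in range(4):
    assert decode_pmv_index(encode_pmv_index(i)) == i
```

Because lower indices cost fewer bits, placing the most frequently selected candidates at the front of the list keeps the average index code short.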
Furthermore, in the adding, a plurality of index values and a plurality of candidates for the predicted motion vector may be added to the candidate list so that the index values are in one-to-one correspondence with the candidates; in the decoding, an index value may be decoded, the index value having been encoded such that the code of an index value becomes longer as the index value becomes larger; and in the selecting, the predicted motion vector corresponding to the decoded index value may be selected from the candidate list. In this way, the selected predicted motion vector is properly decoded, so the encoder and the decoder select the same predicted motion vector. Furthermore, in the adding, the first adjacent motion vector of the adjacent block may be added to the candidate list, the adjacent block being one of a block adjacent to the left of, a block adjacent above, and a block adjacent to the upper right of the current block. In this way, a plurality of adjacent motion vectors are added to the candidate list as candidates for the predicted motion vector, increasing the number of options for the predicted motion vector. Furthermore, an image encoding apparatus according to an aspect of the present invention may be an image encoding apparatus that encodes a current image per block with prediction using one or both of a first reference image list and a second list
of reference images, and includes: an addition unit configured to add, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used to encode a current motion vector, the first adjacent motion vector being a motion vector of a block adjacent to a current block included in the current image, and the current motion vector being a motion vector of the current block; a selection unit configured to select the predicted motion vector to be used to encode the current motion vector from the candidate list including the first adjacent motion vector; and an encoding unit configured to encode the current motion vector using the selected predicted motion vector, wherein the addition unit may be configured to add the first adjacent motion vector to the candidate list for the current motion vector, the first adjacent motion vector indicating a position in a first reference image included in the first reference image list, and the current motion vector indicating a position in a second reference image included in the second reference image list. In this way, the image encoding method is implemented as the image encoding apparatus.
Furthermore, an image decoding apparatus according to an aspect of the present invention may be an image decoding apparatus that decodes a current image per block with prediction using one or both of a first reference image list and a second reference image list, and includes: an addition unit configured to add, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used to decode a current motion vector, the first adjacent motion vector being a motion vector of a block adjacent to a current block included in the current image, and the current motion vector being a motion vector of the current block; a selection unit configured to select the predicted motion vector to be used to decode the current motion vector from the candidate list including the first adjacent motion vector; and a decoding unit configured to decode the current motion vector using the selected predicted motion vector, wherein the addition unit may be configured to add the first adjacent motion vector to the candidate list for the current motion vector, the first adjacent motion vector indicating a position in a first reference image included in the first reference image list, and the current motion vector indicating a position in a second reference image included in the second reference image list. In this way, the image decoding method is implemented as the image decoding apparatus.
Furthermore, an image encoding and decoding apparatus according to an aspect of the present invention may be an image encoding and decoding apparatus that encodes a current image per block and decodes a current image per block, with prediction using one or both of a first reference image list and a second reference image list, and includes: an addition unit configured to add, to a candidate list, a first adjacent motion vector as a candidate for a predicted motion vector to be used to encode or decode a current motion vector, the first adjacent motion vector being a motion vector of a block adjacent to a current block to be processed and included in the current image to be encoded or decoded, and the current motion vector being a motion vector of the current block; a selection unit configured to select the predicted motion vector to be used to encode or decode the current motion vector from the candidate list including the first adjacent motion vector; an encoding unit configured to encode the current motion vector using the selected predicted motion vector; and a decoding unit configured to decode the current motion vector using the selected predicted motion vector, wherein the addition unit may be configured to add the first adjacent motion vector to the candidate list for the current motion vector, the first adjacent motion vector indicating a position in a first reference image included in the first reference image list, and the current motion vector indicating a position in a second reference image included in the second reference image list. In this way, the image encoding and decoding apparatus implements both the functions of the image encoding apparatus and the functions of the image decoding apparatus.
Advantageous Effects of the Invention
According to the present invention, a predicted motion vector suitable for improving the coding efficiency of a motion vector is derived.
In this way, it is possible to improve the coding efficiency of motion vectors.
Brief Description of the Drawings
Fig. 1 illustrates a configuration of an image encoding apparatus according to Embodiment 1; Fig. 2 illustrates a flowchart of operations performed by the image encoding apparatus according to Embodiment 1; Fig. 3 illustrates a flowchart of processes for determining a prediction direction according to Embodiment 1; Fig. 4 illustrates a flowchart of processes for calculating a candidate list according to Embodiment 1; Fig. 5 illustrates a flowchart of processes for determining an addition flag according to Embodiment 1; Fig. 6A illustrates an example of a candidate list for the first prediction direction according to Embodiment 1; Fig. 6B illustrates an example of a candidate list for the second prediction direction according to Embodiment 1; Fig. 7 illustrates an example of predicted motion vector index codes according to Embodiment 1; Fig. 8 illustrates processes for selecting a predicted motion vector according to Embodiment 1; Fig. 9 illustrates a configuration of an image decoding apparatus according to Embodiment 2; Fig. 10 illustrates a flowchart of operations performed by the image decoding apparatus according to Embodiment 2; Fig. 11A illustrates a configuration of an image encoding apparatus according to Embodiment 3; Fig. 11B illustrates a flowchart of operations performed by the image encoding apparatus according to Embodiment 3; Fig. 12A illustrates a configuration of an image decoding apparatus according to Embodiment 4; Fig. 12B illustrates a flowchart of operations performed by the image decoding apparatus according to Embodiment 4; Fig.
13 illustrates a configuration of an image encoding and decoding apparatus according to Modality 5; Figure 14 illustrates an overall configuration of a content delivery system for implementing content delivery services; Figure 15 illustrates an overall configuration of a digital broadcast system; Figure 16 shows a block diagram illustrating an example of a television configuration; Fig. 17 shows a block diagram illustrating an example of a configuration of an information reproduction/recording unit that reads and writes information on a recording medium which is an optical disc; Figure 18 illustrates an example of a configuration of a recording medium which is an optical disc; Figure 19A illustrates an example of a cell phone; Figure 19B illustrates an example of a cell phone configuration; Figure 20 illustrates a multiplexed data structure; Figure 21 schematically illustrates how each of the streams is multiplexed into multiplexed data; Figure 22 illustrates how a video stream is stored in a PES packet stream in more detail; Fig. 23 illustrates a structure of TS packets and source packets in the multiplexed data; Fig. 24 illustrates a data structure of a PMT; Fig. 25 illustrates an internal information structure of multiplexed data; Fig. 26 illustrates an internal structure of flow attribute information; Figure 27 illustrates steps to identify video data; Fig. 28 shows a block diagram illustrating an example of a configuration of an integrated circuit for implementing the moving picture encoding method and the moving picture decoding method according to each of the embodiments; Figure 29 illustrates a configuration for switching between drive frequencies; Figure 30 illustrates steps for identifying video data and switching between drive frequencies; Fig. 31 illustrates an example of a look-up table in which video data patterns are associated with trigger frequencies; Fig. 
32A illustrates an example of a configuration for sharing a module of a signal processing unit;
Fig. 32B illustrates another example of a configuration for sharing a module of a signal processing unit;
Fig. 33 illustrates an example of two reference image lists; and
Fig. 34 illustrates an example of a current block to be coded and three adjacent blocks.

Description of Modalities

Embodiments of the present invention will be described with reference to the drawings. The embodiments described below indicate favorable and specific examples of the present invention. The values, shapes, materials, component elements, positions and connections of the component elements, steps and orders of steps indicated in the embodiments are mere examples, and do not limit the present invention. The present invention is limited only by the claims. Component elements that are not described in the independent claims, which describe the most generic concept of the present invention, are not indispensable for solving the problems addressed by the present invention, but are described as elements of favorable embodiments. Furthermore, in the following description, the first reference image list corresponds to L0 prediction, and the second reference image list corresponds to L1 prediction. Similarly, the first reference image list corresponds to the first prediction direction, and the second reference image list corresponds to the second prediction direction. Conversely, the first reference image list may correspond to L1 prediction and the second reference image list may correspond to L0 prediction. Similarly, the first reference image list may correspond to the second prediction direction, and the second reference image list may correspond to the first prediction direction.

Modality 1

Fig. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to Modality 1. An image encoding apparatus 100 in Fig.
1 includes an orthogonal transform unit 102, a quantization unit 103, an inverse quantization unit 105, an inverse orthogonal transform unit 106, a block memory 108, a frame memory 109, an intra prediction unit 110, an inter prediction unit 111, an inter prediction control unit 114, a picture type determining unit 113, a reference image list management unit 115, an addition determination unit 116, a variable-length encoding unit 104, a subtraction unit 101, an addition unit 107 and a switching unit 112. The orthogonal transform unit 102 performs transformation, from the image domain to the frequency domain, on prediction error data between predicted image data generated by a unit described later and an input image sequence. The quantization unit 103 quantizes the prediction error data transformed into the frequency domain. The inverse quantization unit 105 inversely quantizes the prediction error data quantized by the quantization unit 103. The inverse orthogonal transform unit 106 performs transformation on the prediction error data inversely quantized by the inverse quantization unit 105, from the frequency domain to the image domain. The block memory 108 is a memory for storing, per block, a decoded image generated from the predicted image data and the prediction error data inversely quantized by the inverse quantization unit 105. The frame memory 109 is a memory for storing the decoded image per frame. The picture type determining unit 113 determines in which picture type, among I picture, B picture and P picture, an input image sequence is to be encoded, and generates picture type information. The intra prediction unit 110 generates the predicted image data by intra prediction of the current block, using the decoded image stored per block in the block memory 108. The inter prediction unit 111 generates the predicted image data by inter prediction of the current block, using the decoded image stored per frame in the frame memory 109.
The reference image list management unit 115 allocates reference image indices to coded reference images to be referred to in inter prediction, and generates a reference list that associates the reference image indices with display orders. Although the reference image list management unit 115 manages the reference images by reference image indices and display orders in Modality 1, it may instead manage the reference images by reference image indices and coding orders. The addition determination unit 116 determines, with reference to the first and second reference image lists generated by the reference image list management unit 115, whether or not a candidate for a predicted motion vector (candidate predicted motion vector) is to be added. More specifically, the addition determination unit 116 determines, in a method described later, whether or not a candidate predicted motion vector in the first prediction direction is to be added to a candidate list for the second prediction direction of the current block to be coded. Then, the addition determination unit 116 sets an addition signaling. The inter prediction control unit 114 determines, as the predicted motion vector to be used for coding a motion vector, the candidate predicted motion vector having the smallest error with respect to the motion vector derived by motion estimation. Here, the error is a difference value between the candidate predicted motion vector and the motion vector derived by motion estimation. Furthermore, the inter prediction control unit 114 generates, per block, a predicted motion vector index corresponding to the determined predicted motion vector. The inter prediction control unit 114 then transmits the predicted motion vector index, the error information of the candidate predicted motion vectors and the reference image indices to the variable-length encoding unit 104.
The variable-length encoding unit 104 variable-length encodes the quantized prediction error data, an inter prediction direction signaling, the reference picture indices and the picture type information to generate a bit stream. Fig. 2 is the process schematic procedure of the image coding method according to Modality 1. The prediction control unit inter 114 determines a prediction direction when the current block is coded in the mode vector estimation mode. -movement (S101). Next, the inter prediction control unit 114 determines whether or not the prediction direction in motion vector estimation mode is bidirectional prediction (S102). When the prediction direction is bidirectional prediction (Yes in S102), the prediction control unit inter 114 calculates a list of candidate predicted motion vectors for each of the first and second prediction directions in a method to be described later. (S103, S104). Next, the addition determination unit 116 determines whether or not the candidate predicted motion vector in the first prediction direction is to be added to the list of candidate predicted motion vectors for the second prediction direction (S105). When the addition determination unit 116 determines that the candidate predicted motion vector in the first prediction direction should be added (Yes in S105), the inter prediction control unit 114 adds the candidate predicted motion vector in the first prediction direction to the list of candidate predicted motion vectors for the second prediction direction (S106). Next, the prediction control unit inter 114 selects the predicted motion vector in the first prediction direction from the list of candidate predicted motion vectors for the first prediction direction, and the predicted motion vector in the second prediction direction of the list. of candidate predicted motion vectors for the second prediction direction. 
Then, the variable-length encoding unit 104 encodes the predicted motion vector indices corresponding to the selected predicted motion vectors, and adds the indices to a bit stream (S107). When the prediction direction in motion vector estimation mode is unidirectional prediction (No in S102), the inter prediction control unit 114 determines whether or not the prediction direction in motion vector estimation mode is the second prediction direction (S108). When the prediction direction is the second prediction direction (Yes in S108), the inter prediction control unit 114 calculates a candidate predicted motion vector in the second prediction direction (S109). Next, the addition determination unit 116 determines whether or not the candidate predicted motion vector in the first prediction direction is to be added to the list of candidate predicted motion vectors for the second prediction direction (S110). When the addition determination unit 116 determines that the candidate predicted motion vector in the first prediction direction is to be added (Yes in S110), the inter prediction control unit 114 adds the candidate predicted motion vector in the first prediction direction to the list of candidate predicted motion vectors for the second prediction direction (S111). Next, the inter prediction control unit 114 selects the predicted motion vector in the second prediction direction from the list of candidate predicted motion vectors for the second prediction direction. Then, the variable-length encoding unit 104 encodes a predicted motion vector index corresponding to the selected predicted motion vector, and adds the encoded index to a bit stream (S112). When the prediction direction is not the second prediction direction (No in S108), the inter prediction control unit 114 calculates a candidate predicted motion vector in the first prediction direction (S113).
Next, the inter prediction control unit 114 selects the predicted motion vector in the first prediction direction from the list of candidate predicted motion vectors for the first prediction direction. Then, the variable-length encoding unit 104 encodes a predicted motion vector index corresponding to the selected predicted motion vector, and adds the encoded index to a bit stream (S114). Finally, the variable-length encoding unit 104 encodes a reference picture index and an inter prediction direction signaling indicating the prediction direction of motion vector estimation mode, and adds the inter prediction direction signaling and the reference picture index to a bit stream (S115). In the following, the method of determining a prediction direction in motion vector estimation mode (S101) in Fig. 2 will be described in detail with reference to a processing procedure in Fig. 3. The inter prediction control unit 114 performs motion estimation on the reference image identified by the reference image index in the first prediction direction and on the reference image identified by the reference image index in the second prediction direction. Then, the inter prediction control unit 114 generates the first and second motion vectors corresponding to the two reference images (S201). Here, in the motion estimation, the inter prediction control unit 114 calculates difference values between the current block in the picture to be coded and the blocks in each of the reference images. Then, the inter prediction control unit 114 determines, among the blocks in the reference image, the block having the smallest difference value as a reference block. Then, the inter prediction control unit 114 calculates a motion vector with reference to a position of the current block and a position of the reference block. Next, the inter prediction unit 111 generates a predicted image in the first prediction direction, using the first calculated motion vector.
The inter prediction control unit 114 calculates Cost1, which is a cost for coding the current block using that predicted image, for example by an R-D optimization model represented by the following Equation 4 (S202).

Cost = D + λ × R ... (Equation 4)

In Equation 4, D denotes coding distortion. More specifically, D is, for example, a sum of the absolute differences between (i) the pixel values obtained by encoding and decoding the current block using a predicted image generated from a certain motion vector and (ii) the original pixel values of the current block. Furthermore, R denotes a generated code amount. More specifically, R is, for example, the code amount necessary for coding the motion vector used to generate the predicted image. Furthermore, λ denotes a Lagrange undetermined multiplier. Next, the inter prediction unit 111 generates a predicted image in the second prediction direction, using the second calculated motion vector. Then, the inter prediction control unit 114 calculates Cost2 by Equation 4 (S203). Next, the inter prediction unit 111 generates a bidirectional predicted image using the calculated first and second motion vectors. Here, the inter prediction unit 111 generates the bidirectional predicted image by averaging, per pixel, the predicted image obtained from the first motion vector and the predicted image obtained from the second motion vector. Then, the inter prediction control unit 114 calculates CostBi by Equation 4 (S204). Then, the inter prediction control unit 114 compares Cost1, Cost2 and CostBi (S205). When CostBi is the smallest (Yes in S205), the inter prediction control unit 114 determines bidirectional prediction as the prediction direction of motion vector estimation mode (S206). When CostBi is not the smallest (No in S205), the inter prediction control unit 114 compares Cost1 and Cost2 (S207).
When Cost1 is smaller (Yes in S207), the inter prediction control unit 114 determines unidirectional prediction in the first prediction direction as the prediction direction of motion vector estimation mode (S208). When Cost1 is not smaller (No in S207), the inter prediction control unit 114 determines unidirectional prediction in the second prediction direction as the prediction direction of motion vector estimation mode (S209). Although the inter prediction unit 111 averages the images per pixel when the bidirectional predicted image is generated in Modality 1, it may instead calculate a weighted average of the images, and so on. In the following, the method of calculating a list of candidate predicted motion vectors in Fig. 2 (S103, S104, S109 and S113) will be described in detail with reference to a processing procedure in Fig. 4. The inter prediction control unit 114 determines an adjacent block A to the left of the current block, an adjacent block B above the current block, and an adjacent block C above and to the right of the current block (S301). For example, the inter prediction control unit 114 determines, as the adjacent block A, the block to which the pixel adjacent to the left of the pixel located at the upper left corner of the current block belongs. Furthermore, the inter prediction control unit 114 determines, as the adjacent block B, the block to which the pixel adjacent above the pixel located at the upper left corner of the current block belongs. Furthermore, the inter prediction control unit 114 determines, as the adjacent block C, the block to which the pixel adjacent above and to the right of the upper right corner of the current block belongs. Next, the inter prediction control unit 114 determines whether or not each of the adjacent blocks A, B and C satisfies two conditions (S302). One of the conditions is that the adjacent block N (N is one of A, B and C) has a motion vector in a prediction direction identical to that of the motion vector of the current block.
The other is that the reference image of the adjacent block N is identical to that of the current block. When the adjacent block N satisfies both conditions (Yes in S302), the inter prediction control unit 114 adds the motion vector of the adjacent block N to a list of candidate predicted motion vectors (S303). Furthermore, the inter prediction control unit 114 calculates a median value (center value) of the motion vectors of the adjacent blocks, and adds the median value to the list of candidate predicted motion vectors (S304). As described above, the inter prediction control unit 114 adds, to the list of candidate predicted motion vectors, a motion vector of an adjacent block having a prediction direction identical to that of the corresponding motion vector of the current block, and does not add a motion vector of an adjacent block having a different prediction direction. However, the inter prediction control unit 114 may instead add a motion vector of an adjacent block having a different prediction direction to the list of candidate predicted motion vectors, with the motion vector to be added set to 0. Next, the method of determining an addition signaling in Fig. 2 (S105, S110) will be described. There is a case where the reference image indicated by the reference index in the first prediction direction of an adjacent block is identical to the reference image indicated by the reference index in the second prediction direction of the current block. Generally speaking, in such a case the motion vector in the first prediction direction of the adjacent block tends to have a value relatively close to the value of the motion vector in the second prediction direction of the current block.
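The basic construction of the candidate list in steps S302 to S304 can be sketched as follows. This is a minimal sketch that models a block as a dictionary holding a prediction direction, a reference image and a motion vector; the dictionary keys and the per-component center value used for the median are assumptions of the sketch, not the actual data structures.

```python
def candidate_list(current, adjacent):
    # S302: an adjacent block qualifies only when its motion vector has the
    # same prediction direction and the same reference image as the motion
    # vector of the current block.
    vectors = [n["mv"] for n in adjacent
               if n is not None
               and n["direction"] == current["direction"]
               and n["ref"] == current["ref"]]
    candidates = list(vectors)  # S303: add the qualifying motion vectors
    if vectors:
        # S304: also add a per-component median (center value) of the
        # qualifying motion vectors, placed first as in Figs. 6A and 6B.
        xs = sorted(v[0] for v in vectors)
        ys = sorted(v[1] for v in vectors)
        candidates.insert(0, (xs[len(xs) // 2], ys[len(ys) // 2]))
    return candidates
```

An adjacent block whose motion vector points in the other prediction direction is simply skipped here; the addition signaling described next relaxes exactly this restriction.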
Thus, when the reference image indicated by the reference index in the first prediction direction of the adjacent block is identical to the reference image indicated by the reference index in the second prediction direction of the current block, the inter prediction control unit 114 adds the motion vector in the first prediction direction of the adjacent block as a candidate predicted motion vector in the second prediction direction of the current block. In other words, the inter prediction control unit 114 adds the candidate predicted motion vector in the first prediction direction of the current block as a candidate predicted motion vector in the second prediction direction. As such, the image coding apparatus 100 adds, as the candidate predicted motion vectors in the second prediction direction of the current block, not only the motion vector in the second prediction direction of an adjacent block but also its motion vector in the first prediction direction, so as to perform efficient coding. Modality 1 is not limited to the configuration in which the inter prediction control unit 114 adds the candidate predicted motion vector in the first prediction direction of the current block as a candidate predicted motion vector in the second prediction direction. For example, there is a case where the reference image in the second prediction direction of an adjacent block is identical to the reference image in the first prediction direction of the current block. In such a case, the inter prediction control unit 114 may add the motion vector in the second prediction direction of the adjacent block as a candidate predicted motion vector in the first prediction direction of the current block. In other words, the inter prediction control unit 114 may add the candidate predicted motion vector in the second prediction direction of the current block as a candidate predicted motion vector in the first prediction direction. With this configuration as well, the image encoding apparatus 100 can efficiently encode the motion vectors. Furthermore, the variable-length encoding unit 104 may encode the addition signaling and add the signaling to a bit stream.
In this way, a decoder can determine, with reference to the addition signaling, whether or not the candidate predicted motion vector in the first prediction direction is to be added. Thus, the amount of computation in decoding can be reduced. Furthermore, the variable-length encoding unit 104 may add an addition signaling per block. In this way, flexible switching is possible. Furthermore, the variable-length encoding unit 104 may add an addition signaling per picture. In this way, it is possible to improve the coding efficiency and reduce the amount of computation by the decoder. Next, the method of determining an addition signaling will be described in detail with reference to Fig. 5. The addition determination unit 116 obtains the reference picture index in the second prediction direction of the current block (S401). Furthermore, the inter prediction control unit 114 obtains the reference picture indices in the first prediction direction of the adjacent blocks A, B and C (S402). Next, the addition determination unit 116 determines whether or not the reference image indicated by the reference image index in the second prediction direction of the current block is identical to the reference image indicated by the reference image index in the first prediction direction of an adjacent block (S403). Here, the addition determination unit 116 makes the determination using the first and second reference image lists. For example, the addition determination unit 116 obtains, from the second reference image list, the display order of the reference image indicated by the reference image index in the second prediction direction of the current block. Furthermore, the addition determination unit 116 obtains, from the first reference image list, the display order of the reference image indicated by the reference image index in the first prediction direction of the adjacent block. The addition determination unit 116 compares these two display orders.
When it determines that the two display orders are identical to each other, the addition determination unit 116 determines that the two reference images are identical. When the reference image in the second prediction direction of the current block is identical to the reference image in the first prediction direction of the adjacent block (Yes in S403), the addition determination unit 116 turns on the addition signaling (S404). When the reference image in the second prediction direction of the current block is not identical to the reference image in the first prediction direction of the adjacent block (No in S403), the addition determination unit 116 turns off the addition signaling (S405). In Modality 1, the addition determination unit 116 determines whether or not the two reference images are identical to each other with reference to display orders. Alternatively, the addition determination unit 116 may determine whether or not the two reference images are identical to each other with reference to coding orders, and so on. Furthermore, the addition determination unit 116 may perform the processes in Fig. 5 only when the result of the determination in Fig. 4 is false (No in S302). When the result of the determination in Fig. 4 is true (Yes in S302), the inter prediction control unit 114 adds the motion vector in the second prediction direction of the adjacent block as a candidate predicted motion vector in the second prediction direction of the current block, and additionally adding the motion vector in the first prediction direction of the adjacent block as a candidate predicted motion vector in the second prediction direction of the current block would be redundant. Thus, the addition determination unit 116 may perform the processes in Fig. 5 only when the result of the determination in Fig. 4 is false (No in S302).
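The determination in Fig. 5 (S401 to S405) can be sketched as follows. This is a minimal sketch that models each reference image list as a sequence mapping a reference image index to a display order; the function and parameter names are illustrative.

```python
def addition_signaling(cur_ref_idx_l1, adj_ref_idx_l0, ref_list_l0, ref_list_l1):
    # S401: display order of the reference image indicated by the
    # second-direction reference image index of the current block.
    cur_display_order = ref_list_l1[cur_ref_idx_l1]
    # S402: display order of the reference image indicated by the
    # first-direction reference image index of the adjacent block.
    adj_display_order = ref_list_l0[adj_ref_idx_l0]
    # S403: identical display orders mean an identical reference image, so
    # the signaling is turned on (S404); otherwise it is turned off (S405).
    return cur_display_order == adj_display_order
```

For example, when both lists map their index 0 to the same display order, the signaling is on, and the adjacent block's first-direction motion vector becomes a candidate for the current block's second-direction motion vector.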
With this restriction, the inter prediction control unit 114 adds the motion vector in the first prediction direction of the adjacent block as a candidate predicted motion vector in the second prediction direction of the current block only when the motion vector in the second prediction direction of the adjacent block is not already such a candidate. In this way, it is possible to improve coding efficiency. In the following, an example of the lists of candidate predicted motion vectors generated by the processes (S103 to S106) in Fig. 2 when, as illustrated in Fig. 34, the current block has the motion vector MvL0 in the first prediction direction and the motion vector MvL1 in the second prediction direction will be described with reference to Figs. 6A and 6B. The following relationships are assumed in Fig. 34: the reference image in the first prediction direction of the current block is identical to the reference image in the first prediction direction of each of the adjacent blocks A, B and C; and the reference image in the second prediction direction of the current block, the reference image in the second prediction direction of each of the adjacent blocks A and C, and the reference image in the first prediction direction of the adjacent block B are identical to one another. In the list of candidate predicted motion vectors for the first prediction direction in Fig. 6A, the predicted motion vector index corresponding to Median(MvL0_A, MvL0_B, MvL0_C) is 0. The predicted motion vector index corresponding to the motion vector MvL0_A is 1. The predicted motion vector index corresponding to the motion vector MvL0_B is 2. The predicted motion vector index corresponding to the motion vector MvL0_C is 3. In the list of candidate predicted motion vectors for the second prediction direction in Fig. 6B, the predicted motion vector index corresponding to Median(MvL1_A, MvL0_B, MvL1_C) is 0.
The predicted motion vector index corresponding to the motion vector MvL1_A is 1. The predicted motion vector index corresponding to the motion vector MvL0_B is 2. The predicted motion vector index corresponding to the motion vector MvL1_C is 3. Here, since the adjacent block B has no motion vector MvL1_B in the second prediction direction, the inter prediction control unit 114 adds its motion vector MvL0_B in the first prediction direction to the list of candidate predicted motion vectors for the second prediction direction. As such, when an adjacent block has no motion vector in the second prediction direction but has a motion vector in the first prediction direction, the inter prediction control unit 114 adds the motion vector in the first prediction direction of the adjacent block to the list of candidate predicted motion vectors for the second prediction direction. In this way, it is possible to improve coding efficiency. When an adjacent block has no motion vector for the list of candidate predicted motion vectors for the second prediction direction, the inter prediction control unit 114 does not allocate any predicted motion vector index to it. In this way, it is possible to improve the coding efficiency. The method of allocating the predicted motion vector indices is not limited to this example. When no motion vector is present, the inter prediction control unit 114 may still allocate a predicted motion vector index by adding a motion vector having magnitude 0 to the list of candidate predicted motion vectors. Fig. 7 illustrates an example of a code table used for variable-length coding of the predicted motion vector indices. The smaller a predicted motion vector index is, the shorter its code is. The inter prediction control unit 114 allocates smaller predicted motion vector indices to candidates estimated to have higher prediction accuracy.
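The code table in Fig. 7 is characterized here only by its key property: the smaller the predicted motion vector index, the shorter the code. A truncated unary code, sketched below, has exactly this property; the actual codewords in Fig. 7 may differ.

```python
def truncated_unary_code(index, max_index):
    # One code with the property described for Fig. 7: the smaller the
    # predicted motion vector index, the shorter the codeword.  This is a
    # truncated unary code for illustration only.
    if index < max_index:
        return "1" * index + "0"
    # The largest index needs no terminating "0"; it is already unambiguous.
    return "1" * max_index
```

With four candidates, indices 0 to 3 would receive "0", "10", "110" and "111", so placing the most accurate candidate at index 0 minimizes the expected code length.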
Allocating indices in this way makes it possible to improve coding efficiency. In the example of the list of candidate predicted motion vectors for the second prediction direction in Fig. 6B, the inter prediction control unit 114 allocates the predicted motion vector index of 2 to the motion vector MvL0_B in the first prediction direction of the adjacent block B. However, the inter prediction control unit 114 may instead allocate smaller predicted motion vector indices to candidates in the same prediction direction. More specifically, the inter prediction control unit 114 allocates 0 to the predicted motion vector index corresponding to Median(MvL1_A, MvL0_B, MvL1_C) in the list of candidate predicted motion vectors for the second prediction direction. Furthermore, the inter prediction control unit 114 allocates 1 to the predicted motion vector index corresponding to the motion vector MvL1_A. Furthermore, the inter prediction control unit 114 allocates 2 to the predicted motion vector index corresponding to the motion vector MvL1_C. Furthermore, the inter prediction control unit 114 allocates 3 to the predicted motion vector index corresponding to the motion vector MvL0_B. In this way, candidates in the same prediction direction are prioritized, and smaller predicted motion vector indices are allocated to the candidate predicted motion vectors estimated to have higher prediction accuracy. In the following, the method of selecting a predicted motion vector (S107, S112 and S114) in Fig. 2 will be described in detail with reference to a processing procedure in Fig. 8. The inter prediction control unit 114 initializes a counter value to 0, and sets the smallest differential motion vector to the largest possible value (S501). Next, the inter prediction control unit 114 determines whether or not the differential motion vectors of all the candidate predicted motion vectors have been calculated (S502).
When a candidate predicted motion vector still remains (Yes in S502), the inter prediction control unit 114 calculates a differential motion vector by subtracting the candidate predicted motion vector from the motion vector resulting from motion estimation (S503). Next, the inter prediction control unit 114 determines whether or not the calculated differential motion vector is smaller than the smallest differential motion vector (S504). When the differential motion vector is smaller than the smallest differential motion vector (Yes in S504), the inter prediction control unit 114 updates the smallest differential motion vector and the predicted motion vector index (S505). Next, the inter prediction control unit 114 adds 1 to the counter value (S506). Then, the inter prediction control unit 114 again determines whether or not a next candidate predicted motion vector exists (S502). When the inter prediction control unit 114 determines that the differential motion vectors of all the candidate predicted motion vectors have been calculated (No in S502), it transmits the finally determined smallest differential motion vector and predicted motion vector index to the variable-length encoding unit 104, and causes the variable-length encoding unit 104 to encode the smallest differential motion vector and the predicted motion vector index (S507). According to Modality 1, when selecting a motion vector of an adjacent block as a candidate predicted motion vector, the inter prediction control unit 114 adopts a new selection criterion for the selection. The inter prediction control unit 114 thereby derives a predicted motion vector more suitable for coding the motion vector of the current picture, and it is thus possible to improve coding efficiency.
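The selection procedure of Fig. 8 (S501 to S507) can be sketched as follows, modelling motion vectors as (x, y) pairs. Measuring the size of a differential motion vector as the sum of the absolute values of its components is an assumption of this sketch; the document only states that the smaller vector is kept.

```python
def select_predicted_mv(candidates, estimated_mv):
    # Sketch of S501-S507: choose the candidate predicted motion vector that
    # minimizes the differential motion vector, i.e. the difference between
    # the motion vector found by motion estimation and the candidate.
    best_index = None
    best_diff = None
    best_size = float("inf")  # S501: initialize to the largest value
    for index, cand in enumerate(candidates):  # S502/S506: loop over candidates
        diff = (estimated_mv[0] - cand[0], estimated_mv[1] - cand[1])  # S503
        size = abs(diff[0]) + abs(diff[1])  # assumed size measure
        if size < best_size:  # S504
            best_size = size  # S505: update the smallest differential MV
            best_diff = diff
            best_index = index
    # S507: the smallest differential motion vector and the corresponding
    # predicted motion vector index are what get variable-length coded.
    return best_index, best_diff
```

The closer the chosen candidate is to the estimated motion vector, the smaller the differential motion vector to be coded, which is the source of the coding efficiency gain described above.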
In particular, there is a case where the reference image indicated by the reference image index of the second prediction direction of the current block is identical to the reference image indicated by the reference image index of the first prediction direction of the adjacent block. In such a case, the inter prediction control unit 114 adds the motion vector in the first prediction direction of the adjacent block as a candidate predicted motion vector in the second prediction direction of the current block. Thus, efficient coding is possible. In Modality 1, the inter prediction control unit 114 adds the motion vector in the first prediction direction of the adjacent block to the list of candidate predicted motion vectors for the second prediction direction of the current block. However, the inter prediction control unit 114 can instead add the motion vector in the second prediction direction of the adjacent block to the list of candidate predicted motion vectors for the first prediction direction of the current block.
Modality 2
Fig. 9 is a block diagram illustrating a configuration of an image decoding apparatus according to Modality 2. As illustrated in Fig. 9, an image decoding apparatus 200 includes a variable-length decoding unit 204, an inverse quantization unit 205, an inverse orthogonal transform unit 206, an addition unit 207, a block memory 208, a frame memory 209, an intra prediction unit 210, an inter prediction unit 211, a switching unit 212, an inter prediction control unit 214, a reference image list management unit 215 and an addition determination unit 216. The variable-length decoding unit 204 performs variable-length decoding on an input bit stream. Then, the variable-length decoding unit 204 generates an image type, a reference image index, inter prediction direction information, a predicted motion vector index, and quantized coefficients. The inverse quantization unit 205 inversely quantizes the quantized coefficients.
The inverse orthogonal transform unit 206 transforms the inversely quantized coefficients from the frequency domain to the image domain to generate prediction error image data. Block memory 208 is a memory for storing, per block, a sequence of images generated by adding the predicted image data to the prediction error image data. Frame memory 209 is a memory for storing the sequence of images per frame. The intra prediction unit 210 generates the predicted image data of a block to be decoded by means of intra prediction, using the sequence of images stored per block in the block memory 208. The inter prediction unit 211 generates the predicted image data of the block to be decoded by means of inter prediction, using the sequence of images stored per frame in the frame memory 209. The inter prediction control unit 214 controls a method of generating a motion vector and predicted image data in inter prediction according to the image type, the reference image index, the inter prediction direction information and the predicted motion vector index. The reference image list management unit 215 generates a reference list of reference image indices and display orders, to allocate the reference image indices to decoded reference images to be referred to in inter prediction (similarly to figure 33). A B image is encoded with reference to two images. Thus, the reference image list management unit 215 holds two reference lists. The reference image list management unit 215 manages the reference images by reference image indices and display orders in Modality 2. However, the reference image list management unit 215 can manage the reference images by reference image indices and coding orders (decoding orders).
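For illustration only, the two reference lists held by the reference image list management unit 215 can be modeled as mappings from reference image index to display order; the names and the example display orders below are hypothetical and are not taken from the modalities.

```python
# Illustrative model of the two reference image lists: each list maps a
# reference image index to the display order of a decoded reference image.

ref_list_0 = {0: 1, 1: 0}   # first reference image list (hypothetical orders)
ref_list_1 = {0: 0, 1: 1}   # second reference image list (hypothetical orders)
ref_lists = [ref_list_0, ref_list_1]

def display_order(ref_lists, list_id, ref_index):
    """Look up the display order of the reference image identified by a
    reference image list (0 = first, 1 = second) and a reference index."""
    return ref_lists[list_id][ref_index]
```

Such a lookup is what the addition determination uses when it compares reference images by display order, as described later in Modalities 3 and 4.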
The addition determination unit 216 determines whether or not a candidate predicted motion vector in the first prediction direction should be added to the list of candidate predicted motion vectors for the second prediction direction of the block to be decoded, with reference to the first and second reference image lists generated by the reference image list management unit 215. Then, the addition determination unit 216 sets an addition signal. Since the procedure for determining the addition signal is the same as that in figure 5 according to Modality 1, the description thereof is omitted. Finally, the addition unit 207 adds the decoded prediction error image data to the predicted image data to generate a sequence of decoded images. Fig. 10 is a schematic process procedure of an image decoding method according to Modality 2. First, the inter prediction control unit 214 determines whether or not a decoded prediction direction is bi-directional (S601). When the decoded prediction direction is bi-directional (Yes in S601), the inter prediction control unit 214 calculates the candidate predicted motion vector lists for the first and second prediction directions (S602, S603). The procedure in figure 4 according to Modality 1 is used to calculate the candidate predicted motion vector lists. The inter prediction control unit 214 decodes the reference image indices of the first and second prediction directions from the bit stream. The addition determination unit 216 determines whether or not the candidate predicted motion vector in the first prediction direction is to be added to the list of candidate predicted motion vectors for the second prediction direction (S604). When the addition signal is ON (Yes in S604), the inter prediction control unit 214 adds the candidate predicted motion vector in the first prediction direction to the list of candidate predicted motion vectors for the second prediction direction (S605).
The addition signal indicating whether or not the candidate predicted motion vector in the first prediction direction should be added is set in the same way as in figure 5 according to Modality 1. The inter prediction control unit 214 selects the predicted motion vectors indicated by the predicted motion vector indices of the first and second prediction directions, which are decoded from the bit stream, from the candidate predicted motion vector lists for the first and second prediction directions. The inter prediction control unit 214 adds the differential motion vectors in the first and second prediction directions, which are decoded from the bit stream, to the predicted motion vectors in the first and second prediction directions. In this way, the inter prediction control unit 214 decodes the motion vectors in the first and second prediction directions (S606). When the decoded prediction direction is not bi-directional (No in S601), that is, when the inter prediction direction is uni-directional, the inter prediction control unit 214 determines whether or not the prediction direction is the second prediction direction (S607). When the prediction direction is the second prediction direction (Yes in S607), the inter prediction control unit 214 calculates a candidate predicted motion vector in the second prediction direction (S608). The addition determination unit 216 determines whether or not a candidate predicted motion vector in the first prediction direction is to be added to the list of candidate predicted motion vectors for the second prediction direction (S609). When the addition signal is ON (Yes in S609), the inter prediction control unit 214 adds the candidate predicted motion vector in the first prediction direction to the list of candidate predicted motion vectors for the second prediction direction (S610).
The inter prediction control unit 214 selects the predicted motion vector indicated by the predicted motion vector index of the second prediction direction, which is decoded from the bit stream, from the list of candidate predicted motion vectors for the second prediction direction. The inter prediction control unit 214 adds the selected predicted motion vector to the differential motion vector in the second prediction direction, which is decoded from the bit stream, thus decoding the motion vector in the second prediction direction (S611). When the prediction direction is not the second prediction direction (No in S607), that is, when the prediction direction is the first prediction direction, the inter prediction control unit 214 calculates a candidate predicted motion vector in the first prediction direction (S612). The inter prediction control unit 214 selects the predicted motion vector indicated by the predicted motion vector index of the first prediction direction, which is decoded from the bit stream, from the list of candidate predicted motion vectors for the first prediction direction. Then, the inter prediction control unit 214 adds the selected predicted motion vector to the differential motion vector in the first prediction direction, which is decoded from the bit stream, thus decoding the motion vector in the first prediction direction (S613). According to Modality 2, when selecting a motion vector of an adjacent block as a candidate predicted motion vector, the inter prediction control unit 214 adopts a new selection criterion for the selection. In this way, a predicted motion vector better suited for decoding a motion vector is derived. In this way, coding efficiency is improved.
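The reconstruction of a motion vector on the decoder side (S606, S611 and S613) reduces to adding the decoded differential motion vector to the predicted motion vector selected by the decoded index. A minimal sketch, with a hypothetical function name and tuple-based motion vectors:

```python
# Hedged sketch of decoder-side motion vector reconstruction: the decoded
# differential motion vector is added to the predicted motion vector selected
# by the decoded predicted motion vector index.

def decode_motion_vector(candidates, decoded_index, decoded_diff):
    """candidates: candidate predicted motion vector list for one prediction
    direction; decoded_index and decoded_diff are values decoded from the
    bit stream. Returns the reconstructed (x, y) motion vector."""
    pred = candidates[decoded_index]
    return (pred[0] + decoded_diff[0], pred[1] + decoded_diff[1])
```

Note that this mirrors the encoder side exactly: subtracting the same predicted motion vector on the encoder and adding it back on the decoder reproduces the original motion vector.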
In particular, there is a case where the reference image indicated by the reference image index of the second prediction direction of the current block to be decoded is identical to the reference image indicated by the reference image index of the first prediction direction of the adjacent block. In such a case, the inter prediction control unit 214 adds the motion vector in the first prediction direction of the adjacent block as a candidate predicted motion vector in the second prediction direction of the current block to be decoded. In this way, coding efficiency is improved. The inter prediction control unit 214 according to Modality 2 adds the motion vector in the first prediction direction of the adjacent block to the list of candidate predicted motion vectors for the second prediction direction of the current block. Meanwhile, the inter prediction control unit 214 can add the motion vector in the second prediction direction of the adjacent block to the list of candidate predicted motion vectors for the first prediction direction of the current block.
Modality 3
Modality 3 further describes an image encoding apparatus including the characteristic component elements of the image encoding apparatus 100 according to Modality 1. Figure 11A illustrates a configuration of the image encoding apparatus according to Modality 3. An image encoding apparatus 300 in Fig. 11A includes an addition unit 301, a selection unit 302 and a coding unit 303. The addition unit 301 mainly corresponds to the addition determination unit 116 according to Modality 1. The selection unit 302 mainly corresponds to the inter prediction control unit 114 according to Modality 1. The coding unit 303 mainly corresponds to the variable-length encoding unit 104 according to Modality 1. The image encoding apparatus 300 encodes the current image per block.
Here, the image encoding apparatus 300 performs prediction using one or both of the first and second reference image lists. In other words, the image encoding apparatus 300 performs prediction using one or both of the reference image indicated by the first reference image list and the reference image indicated by the second reference image list. Fig. 11B is a flowchart of operations performed by the image encoding apparatus 300 in Fig. 11A. First, the addition unit 301 adds the first adjacent motion vector, as a candidate for a predicted motion vector, to a list of candidate predicted motion vectors to be used to encode the current motion vector (S701). The first adjacent motion vector is a motion vector of an adjacent block that is adjacent to the current block to be encoded, included in the current image to be encoded. In addition, the first adjacent motion vector indicates a position in a first reference image included in the first reference image list. The current motion vector is a motion vector of the current block. In addition, the current motion vector indicates a position in a second reference image included in the second reference image list. Next, the selection unit 302 selects a predicted motion vector to be used to encode the current motion vector from the candidate list including the first adjacent motion vector (S702). Next, the coding unit 303 encodes the current motion vector using the selected predicted motion vector (S703). In this way, the adjacent motion vector corresponding to the first reference image list is added to the candidate list corresponding to the second reference image list. In this way, the number of predicted motion vector options increases. Thus, it is possible to derive a suitable predicted motion vector and to improve the coding efficiency of the current motion vector. In addition, the addition unit 301 can add the second adjacent motion vector to the candidate list.
The second adjacent motion vector is a motion vector of an adjacent block, and indicates a position in a third reference image included in the second reference image list. In this way, the adjacent motion vector corresponding to the second reference image list is added to the candidate list corresponding to the second reference image list. In this way, the number of predicted motion vector options increases. Thus, it is possible to derive a suitable predicted motion vector and to improve the coding efficiency of the current motion vector. In addition, the addition unit 301 can determine whether or not the second reference image is identical to the third reference image. By determining that the second reference image is identical to the third reference image, the addition unit 301 can add the second adjacent motion vector to the candidate list. Furthermore, the addition unit 301 can determine whether or not the second reference image is identical to the first reference image. Then, by determining that the second reference image is identical to the first reference image, the addition unit 301 can add the first adjacent motion vector to the candidate list. In this way, the adjacent motion vector is added to the candidate list only when the reference image corresponding to the current motion vector is identical to the reference image corresponding to the adjacent motion vector. Thus, the adjacent motion vector is added to the candidate list only when it is suitable as a candidate for a predicted motion vector, and an appropriate predicted motion vector is derived. In addition, the addition unit 301 can determine whether or not the second reference image is identical to the first reference image when it determines that the second reference image is not identical to the third reference image.
By determining that the second reference image is not identical to the third reference image and that the second reference image is identical to the first reference image, the addition unit 301 can add the first adjacent motion vector to the candidate list. In this way, when the current motion vector corresponds to the second reference image list, the adjacent motion vector corresponding to the second reference image list is preferentially added to the candidate list. Thus, a more suitable adjacent motion vector is added to the candidate list as a candidate for a predicted motion vector. In addition, the addition unit 301 can determine whether or not the second reference image is identical to the third reference image by determining whether or not the display order of the second reference image is identical to the display order of the third reference image. In addition, the addition unit 301 can determine whether or not the second reference image is identical to the first reference image by determining whether or not the display order of the second reference image is identical to the display order of the first reference image. Here, the first reference image is identified by the first reference image list and the first reference index. In addition, the second reference image is identified by the second reference image list and the second reference index. In addition, the third reference image is identified by the second reference image list and the third reference index. In this way, whether or not the reference image identified by the first reference image list is identical to the reference image identified by the second reference image list is appropriately determined based on the display orders. Furthermore, by determining that the second reference image is not identical to the third reference image and that the second reference image is not identical to the first reference image, the addition unit 301 can add 0 to the candidate list.
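The determinations above can be sketched as a single decision function. This is only an illustrative reading of the text: reference images are compared by display order, the function name and argument order are hypothetical, and the zero-vector case described above is included for completeness.

```python
# Hedged sketch of the addition determination: decide which adjacent motion
# vector joins the candidate list for the second prediction direction,
# comparing reference images by display order.

def candidates_to_add(first_adj_mv, second_adj_mv,
                      order_first, order_second, order_third):
    """first_adj_mv / second_adj_mv: adjacent motion vectors for the first and
    second prediction directions; order_*: display orders of the first, second
    and third reference images. Returns the list of vectors to add."""
    if order_second == order_third:
        # The same-list adjacent vector matches the current reference image.
        return [second_adj_mv]
    if order_second == order_first:
        # Otherwise the cross-list (first prediction direction) vector is added.
        return [first_adj_mv]
    # Neither matches: a zero motion vector keeps the candidate list non-empty.
    return [(0, 0)]
```

With this ordering, the vector from the same reference image list is preferentially added, the cross-list vector is added only when the same-list reference image differs, and 0 is added when neither reference image matches.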
In other words, the addition unit 301 can add a motion vector having a magnitude of 0 to the candidate list as a candidate for a predicted motion vector. In this way, a decrease in the number of candidates is suppressed. Thus, a state where there is no candidate in the candidate list is avoided. In addition, the addition unit 301 can add index values and candidates for a predicted motion vector to the candidate list so that the index values are in one-to-one correspondence with the candidates for a predicted motion vector. In addition, the selection unit 302 can select an index value from the candidate list as the predicted motion vector. The coding unit 303 can further encode the selected index value so that the code of the index value is longer as the index value is larger. In this way, the selected predicted motion vector is properly coded. Thus, the encoder and the decoder select the same predicted motion vector. In addition, the addition unit 301 can add the first adjacent motion vector of an adjacent block to the candidate list, where each of a left adjacent block, an upper adjacent block and an upper-right adjacent block with respect to the current block to be encoded is assumed to be the adjacent block. In this way, a plurality of adjacent motion vectors are added to the candidate list as candidates for the predicted motion vector. In this way, the number of predicted motion vector options increases.
Modality 4
Modality 4 further describes an image decoding apparatus including the characteristic component elements of the image decoding apparatus 200 according to Modality 2. Figure 12A illustrates a configuration of the image decoding apparatus according to Modality 4. An image decoding apparatus 400 in Fig. 12A includes an addition unit 401, a selection unit 402 and a decoding unit 403.
The addition unit 401 mainly corresponds to the addition determination unit 216 according to Modality 2. The selection unit 402 mainly corresponds to the inter prediction control unit 214 according to Modality 2. The decoding unit 403 mainly corresponds to the variable-length decoding unit 204 and the inter prediction control unit 214 according to Modality 2. The image decoding apparatus 400 decodes the current image per block. Here, the image decoding apparatus 400 performs prediction using one or both of the first and second reference image lists. In other words, the image decoding apparatus 400 performs prediction using one or both of the reference image indicated by the first reference image list and the reference image indicated by the second reference image list. Figure 12B is a flowchart of operations performed by the image decoding apparatus 400 in Figure 12A. First, the addition unit 401 adds the first adjacent motion vector, as a candidate for a predicted motion vector, to a list of candidate predicted motion vectors to be used to decode the current motion vector (S801). The first adjacent motion vector is a motion vector of an adjacent block which is adjacent to the current block to be decoded, included in the current image to be decoded. In addition, the first adjacent motion vector indicates a position in a first reference image included in the first reference image list. The current motion vector is a motion vector of the current block to be decoded. In addition, the current motion vector indicates a position in a second reference image included in the second reference image list. Next, the selection unit 402 selects a predicted motion vector to be used to decode the current motion vector from the candidate list including the first adjacent motion vector (S802). Next, the decoding unit 403 decodes the current motion vector using the selected predicted motion vector (S803).
In this way, the adjacent motion vector corresponding to the first reference image list is added to the candidate list corresponding to the second reference image list. In this way, the number of predicted motion vector options increases. Thus, it is possible to derive a suitable predicted motion vector and to improve the coding efficiency of the current motion vector. In addition, the addition unit 401 can add the second adjacent motion vector to the candidate list. The second adjacent motion vector is a motion vector of an adjacent block, and indicates a position in a third reference image included in the second reference image list. In this way, the adjacent motion vector corresponding to the second reference image list is added to the candidate list corresponding to the second reference image list. In this way, the number of predicted motion vector options increases. Thus, it is possible to derive a suitable predicted motion vector and to improve the coding efficiency of the current motion vector. Furthermore, the addition unit 401 can determine whether or not the second reference image is identical to the third reference image. Then, by determining that the second reference image is identical to the third reference image, the addition unit 401 can add the second adjacent motion vector to the candidate list. Furthermore, the addition unit 401 can determine whether or not the second reference image is identical to the first reference image. Then, by determining that the second reference image is identical to the first reference image, the addition unit 401 can add the first adjacent motion vector to the candidate list. In this way, the adjacent motion vector is added to the candidate list only when the reference image corresponding to the current motion vector is identical to the reference image corresponding to the adjacent motion vector.
Thus, the adjacent motion vector is added to the candidate list only when it is suitable as a candidate for a predicted motion vector, and an appropriate predicted motion vector is derived. Furthermore, the addition unit 401 can determine whether or not the second reference image is identical to the first reference image when it determines that the second reference image is not identical to the third reference image. By determining that the second reference image is not identical to the third reference image and that the second reference image is identical to the first reference image, the addition unit 401 can add the first adjacent motion vector to the candidate list. In this way, when the current motion vector corresponds to the second reference image list, the adjacent motion vector corresponding to the second reference image list is preferentially added to the candidate list. Thus, a more suitable adjacent motion vector is added to the candidate list as a candidate for a predicted motion vector. In addition, the addition unit 401 can determine whether or not the second reference image is identical to the third reference image by determining whether or not the display order of the second reference image is identical to the display order of the third reference image. In addition, the addition unit 401 can determine whether or not the second reference image is identical to the first reference image by determining whether or not the display order of the second reference image is identical to the display order of the first reference image. Here, the first reference image is identified by the first reference image list and the first reference index. In addition, the second reference image is identified by the second reference image list and the second reference index. In addition, the third reference image is identified by the second reference image list and the third reference index.
In this way, whether or not the reference image identified by the first reference image list is identical to the reference image identified by the second reference image list is appropriately determined based on the display orders. Furthermore, by determining that the second reference image is not identical to the third reference image and that the second reference image is not identical to the first reference image, the addition unit 401 can add 0 to the candidate list. In other words, the addition unit 401 can add a motion vector having a magnitude of 0 to the candidate list as a candidate for a predicted motion vector. In this way, a decrease in the number of candidates is suppressed. Thus, a state where there is no candidate in the candidate list is avoided. In addition, the addition unit 401 can add index values and candidates for a predicted motion vector to the candidate list so that the index values are in one-to-one correspondence with the candidates for a predicted motion vector. The decoding unit 403 can decode the encoded index value so that the code is longer as the index value is larger. Furthermore, the selection unit 402 can select a predicted motion vector corresponding to the decoded index value from the candidate list. In this way, the selected predicted motion vector is properly decoded. Thus, the encoder and the decoder select the same predicted motion vector. In addition, the addition unit 401 can add the first adjacent motion vector of an adjacent block to the candidate list, where each of a left adjacent block, an upper adjacent block and an upper-right adjacent block with respect to the current block to be decoded is assumed to be the adjacent block. In this way, a plurality of adjacent motion vectors are added to the candidate list as candidates for the predicted motion vector. Thus, the number of predicted motion vector options increases.
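A code with the property described above, in which the code of an index value is longer as the index value is larger, is for example the unary code. The sketch below is illustrative only and is not necessarily the code used by the variable-length coding units of the modalities.

```python
# Illustrative unary code for predicted motion vector indices: index n is
# written as n ones followed by a terminating zero, so larger indices receive
# longer codes, as the text requires.

def encode_index_unary(index):
    """Encode a predicted motion vector index as '1' * index + '0'."""
    return "1" * index + "0"

def decode_index_unary(bits):
    """Decode a unary-coded index from the front of a bit string.
    Returns (index, remaining_bits)."""
    index = 0
    while bits[index] == "1":
        index += 1
    return index, bits[index + 1:]
```

Because both sides agree on this mapping, the decoder recovers exactly the index value the encoder selected, so the encoder and the decoder select the same predicted motion vector from the candidate list.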
Modality 5
Modality 5 further describes an image encoding and decoding apparatus including the characteristic component elements of the image encoding apparatus 100 according to Modality 1 and the image decoding apparatus 200 according to Modality 2. Figure 13 illustrates a configuration of the image encoding and decoding apparatus according to Modality 5. An image encoding and decoding apparatus 500 in Fig. 13 includes an addition unit 501, a selection unit 502, an encoding unit 503 and a decoding unit 504. The addition unit 501 mainly corresponds to the addition determination unit 116 according to Modality 1 and the addition determination unit 216 according to Modality 2. The selection unit 502 mainly corresponds to the inter prediction control unit 114 according to Modality 1 and the inter prediction control unit 214 according to Modality 2. The encoding unit 503 mainly corresponds to the variable-length encoding unit 104 according to Modality 1. The decoding unit 504 mainly corresponds to the variable-length decoding unit 204 and the inter prediction control unit 214 according to Modality 2. The image encoding and decoding apparatus 500 encodes the current image per block, and decodes the current image per block. Here, the image encoding and decoding apparatus 500 performs prediction using one or both of the first and second reference image lists. In other words, the image encoding and decoding apparatus 500 performs prediction using one or both of the reference image indicated by the first reference image list and the reference image indicated by the second reference image list. The addition unit 501 adds the first adjacent motion vector, as a candidate for a predicted motion vector, to a list of candidate predicted motion vectors to be used to encode or decode the current motion vector.
The first adjacent motion vector is a motion vector of an adjacent block that is adjacent to the block to be processed, included in the current image to be encoded or decoded. In addition, the first adjacent motion vector indicates a position in a first reference image included in the first reference image list. The current motion vector is a motion vector of the block to be processed. In addition, the current motion vector indicates a position in a second reference image included in the second reference image list. The selection unit 502 selects a predicted motion vector to be used to encode or decode the current motion vector from the candidate list including the first adjacent motion vector. The encoding unit 503 encodes the current motion vector using the selected predicted motion vector. The decoding unit 504 decodes the current motion vector using the selected predicted motion vector. In this way, the image encoding and decoding apparatus 500 implements both the functions of the image encoding apparatus and those of the image decoding apparatus. Although the image encoding apparatus and the image decoding apparatus according to the present invention are described based on the modalities, the present invention is not limited to these modalities. The present invention includes modifications to the modalities devised by those skilled in the art, and other modalities obtained by arbitrarily combining the component elements included in the modalities. For example, processes performed by a particular processing unit can be performed by another processing unit. Furthermore, the order of execution of processes can be changed, and a plurality of processes can be executed in parallel. Furthermore, the present invention can be implemented not only as an image encoding apparatus and an image decoding apparatus, but also as a method including, as steps, the processes performed by the processing units included in the image encoding apparatus and in the image decoding apparatus.
For example, such steps are performed by a computer. Furthermore, the present invention can be implemented as a program for causing a computer to execute the steps included in the method. Furthermore, the present invention can be implemented as a computer-readable recording medium, such as a CD-ROM, on which the program is recorded. In addition, the image encoding apparatus and the image decoding apparatus can be implemented as an image encoding and decoding apparatus by combining the component elements of the image encoding apparatus and the image decoding apparatus. Furthermore, each of the component elements included in the image encoding apparatus and the image decoding apparatus can be implemented as a Large Scale Integration (LSI). The component elements can be built on one chip or a plurality of chips so as to include all or a portion of the component elements. For example, the component elements other than memory can be integrated into a single chip. The name used here is LSI, but it can also be called IC, system LSI, super LSI or ultra LSI depending on the degree of integration. Furthermore, ways to achieve integration are not limited to LSI, and a special circuit or a general-purpose processor and so on can also achieve integration. It is also acceptable to use a Field Programmable Gate Array (FPGA) that is programmable, or a reconfigurable processor in which the connections and arrangements of circuit cells within the LSI are reconfigurable. In the future, as semiconductor technology advances, a brand-new technology may replace LSI. The component elements included in the image encoding apparatus and the image decoding apparatus can be integrated into a circuit using such technology.
Modality 6
The processing described in each of the modalities can be implemented simply by recording, on a recording medium, a program for implementing the moving image encoding method (image encoding method) or the moving image decoding method (image decoding method) described in each of the modalities.
The recording medium can be any recording medium as long as the program can be recorded on it, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory. Next, applications of the moving image encoding method (image encoding method) and the moving image decoding method (image decoding method) described in each of the embodiments, and systems using them, will be described. The system is characterized by including an image encoding and decoding apparatus that includes an image encoding apparatus using the image encoding method and an image decoding apparatus using the image decoding method. Other configurations in the system can be changed appropriately according to each individual case. Figure 14 illustrates an overall configuration of an ex100 content delivery system for implementing content delivery services. The area for providing communication services is divided into cells of desired size, and base stations ex106 to ex110, which are fixed wireless stations, are located in each of the cells. The ex100 content delivery system is connected to devices such as a computer ex111, a personal digital assistant (PDA) ex112, a camera ex113, a cell phone ex114 and a gaming machine ex115, via the Internet ex101, an Internet service provider ex102, a telephone network ex104, as well as the base stations ex106 to ex110. However, the configuration of the ex100 content delivery system is not limited to the configuration shown in Figure 14, and a combination in which any of the elements are connected is acceptable. Furthermore, each of the devices can be connected directly to the telephone network ex104 rather than through the base stations ex106 to ex110, which are the fixed wireless stations. In addition, the devices can be interconnected to each other via short-distance wireless communication and others. The ex113 camera, such as a digital video camera, is capable of capturing video. 
An ex116 camera, such as a digital still camera, is capable of capturing both still images and video. In addition, the ex114 cell phone can be a phone that meets any of the standards such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE) and High Speed Packet Access (HSPA). Alternatively, the ex114 cell phone may be a Personal Handyphone System (PHS). In the ex100 content delivery system, an ex103 streaming server is connected to the ex113 camera and others via the ex104 telephone network and the ex109 base station, which enables distribution of a live show and others. For such a distribution, content (e.g., video of a live music show) captured by the user using the ex113 camera is encoded as described above in each of the embodiments, and the encoded content is transmitted to the ex103 streaming server. On the other hand, the ex103 streaming server performs streaming distribution of the received content data to the clients upon their requests. The clients include the ex111 computer, the ex112 PDA, the ex113 camera, the ex114 cell phone, and the ex115 gaming machine that are capable of decoding the encoded data mentioned above. Each of the devices that has received the distributed data decodes and reproduces the encoded data (i.e. functions as an image decoding apparatus in accordance with the present invention). The captured data can be encoded by the ex113 camera or the ex103 streaming server that transmits the data, or the encoding processes can be shared between the ex113 camera and the ex103 streaming server. Similarly, the distributed data can be decoded by the clients or by the ex103 streaming server, or the decoding processes can be shared between the clients and the ex103 streaming server. 
Furthermore, the still image and video data captured not only by the ex113 camera but also by the ex116 camera can be transmitted to the ex103 streaming server via the ex111 computer. The encoding processes can be performed by the ex116 camera, the ex111 computer or the ex103 streaming server, or shared among them. Furthermore, the encoding and decoding processes can be performed by an LSI ex500 generally included in each of the ex111 computer and the devices. The LSI ex500 can be configured from a single chip or a plurality of chips. Software for encoding and decoding images can be integrated into some type of recording medium (such as a CD-ROM, a floppy disk or a hard disk) that is readable by the ex111 computer and others, and the encoding and decoding processes can be performed using the software. Furthermore, when the ex114 cell phone is equipped with a camera, the moving image data obtained by the camera can be transmitted. The video data is data encoded by the LSI ex500 included in the ex114 cell phone. Furthermore, the ex103 streaming server may be composed of servers and computers, and may decentralize data and process the decentralized data, record, or distribute data. As described above, the clients can receive and reproduce the encoded data in the ex100 content delivery system. In other words, the clients can receive and decode information transmitted by the user, and reproduce the decoded data in real time in the ex100 content delivery system, so that a user who does not have any particular rights or equipment can implement personal broadcasting. Apart from the example of the ex100 content delivery system, at least one of the moving picture coding apparatus (picture coding apparatus) and the moving picture decoding apparatus (picture decoding apparatus) described in each of the embodiments can be implemented in a digital broadcasting system ex200 illustrated in Figure 15. 
More specifically, a broadcasting station ex201 communicates or transmits, via radio waves to a broadcast satellite ex202, multiplexed data obtained by multiplexing audio data and others onto video data. The video data is data encoded by the moving picture encoding method described in each of the embodiments (i.e. data encoded by the picture encoding apparatus in accordance with the present invention). Upon receipt of the multiplexed data, the broadcast satellite ex202 transmits radio waves for broadcasting. Then, a home-use antenna ex204 with a satellite broadcast reception function receives the radio waves. Next, a device, such as a television (receiver) ex300 or a set top box (STB) ex217, decodes the received multiplexed data and reproduces the decoded data (i.e. functions as the image decoding apparatus in accordance with the present invention). In addition, an ex218 reader/writer that (i) reads and decodes multiplexed data recorded on an ex215 recording media such as a DVD and a BD, or (ii) encodes video signals on the ex215 recording media and, in some cases, writes data obtained by multiplexing an audio signal onto the encoded data, may include the moving picture decoding apparatus or the moving picture encoding apparatus as shown in each of the embodiments. In this case, the reproduced video signals are displayed on the monitor ex219, and can be reproduced by another device or system using the recording media ex215 on which the multiplexed data is recorded. Furthermore, it is also possible to implement the image decoding apparatus in the set top box ex217 connected to the cable ex203 for cable television or to the antenna ex204 for satellite and/or terrestrial broadcasting, so as to display the video signals on the monitor ex219 of the ex300 television. The moving picture decoding apparatus can be implemented not in the set top box, but in the ex300 television. 
Figure 16 illustrates the television (receiver) ex300 that uses the moving picture encoding method and the moving picture decoding method described in each of the embodiments. The ex300 television includes: an ex301 tuner that obtains or provides multiplexed data obtained by multiplexing audio data onto video data, via the ex204 antenna or the ex203 cable, etc. that receives a broadcast; an ex302 modulation/demodulation unit that demodulates the received multiplexed data or modulates data into multiplexed data to be supplied outside; and an ex303 multiplexing/demultiplexing unit that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by an ex306 signal processing unit into data. The ex300 television further includes: the ex306 signal processing unit including an ex304 audio signal processing unit and an ex305 video signal processing unit (functioning as the image encoding apparatus or the image decoding apparatus according to the present invention) that decode audio data and video data and encode audio data and video data, respectively; and an ex309 output unit including an ex307 speaker that provides the decoded audio signal, and an ex308 display unit, such as a display, that displays the decoded video signal. In addition, the ex300 television includes an ex317 interface unit including an ex312 operation input unit that receives an input of a user operation. In addition, the ex300 television includes an ex310 control unit that controls overall each component element of the ex300 television, and an ex311 power supply circuit unit that supplies power to each of the elements. 
Other than the ex312 operation input unit, the ex317 interface unit may include: an ex313 bridge that is connected to an external device such as the ex218 reader/writer; an ex314 slot unit for enabling attachment of ex216 recording media, such as an SD card; an ex315 driver to be connected to an external recording medium such as a hard disk; and an ex316 modem to be connected to a telephone network. Here, the ex216 recording media can electrically record information using a non-volatile/volatile semiconductor memory element for storage. The component elements of the ex300 television are connected to each other via a synchronous bus. First, a configuration in which the ex300 television decodes data obtained from outside via the ex204 antenna and others, and reproduces the decoded data, will be described. On the ex300 television, upon a user operation via a remote controller ex220 and others, the ex303 multiplexing/demultiplexing unit demultiplexes the multiplexed data demodulated by the ex302 modulation/demodulation unit, under control of the ex310 control unit including a CPU. Furthermore, the ex304 audio signal processing unit decodes the demultiplexed audio data, and the ex305 video signal processing unit decodes the demultiplexed video data, using the decoding method described in each of the embodiments, on the ex300 television. The ex309 output unit provides the decoded video signal and audio signal outside. When the ex309 output unit provides the video signal and the audio signal, the signals can be temporarily stored in the ex318 and ex319 buffers and others so that the signals are reproduced in synchronization with each other. In addition, the ex300 television can read multiplexed data not through a broadcast and others, but from ex215 and ex216 recording media such as a magnetic disk, an optical disk and an SD card. 
In the following, a configuration in which the ex300 television encodes an audio signal and a video signal, and transmits the data outside or writes the data on a recording medium, will be described. On the ex300 television, upon a user operation via the remote controller ex220 and others, the ex304 audio signal processing unit encodes an audio signal, and the ex305 video signal processing unit encodes a video signal, under control of the ex310 control unit using the encoding method described in each of the embodiments. The ex303 multiplexing/demultiplexing unit multiplexes the encoded video signal and audio signal, and provides the resulting signal outside. When the ex303 multiplexing/demultiplexing unit multiplexes the video signal and the audio signal, the signals can be temporarily stored in the ex320 and ex321 buffers and others so that the signals are reproduced in synchronization with each other. Here, the buffers ex318, ex319, ex320 and ex321 can be plural as illustrated, or at least one buffer can be shared on the ex300 television. Furthermore, data can be stored in a buffer other than the buffers ex318 to ex321 so that system overflow and underflow can be avoided between the ex302 modulation/demodulation unit and the ex303 multiplexing/demultiplexing unit, for example. In addition, the ex300 television can include a configuration for receiving an AV input from a microphone or a camera other than the configuration for obtaining audio and video data from a broadcast or recording media, and can encode the obtained data. Although the ex300 television can encode, multiplex and provide data outside in the description, it may be not capable of performing all the processes, but capable of only one of receiving, decoding and providing data outside. 
In addition, when the ex218 reader/writer reads or writes multiplexed data from or on a recording medium, one of the ex300 television and the ex218 reader/writer can decode or encode the multiplexed data, and the ex300 television and the ex218 reader/writer can share the decoding or encoding. As an example, Figure 17 illustrates a configuration of an ex400 information reproduction/recording unit when data is read from or written on an optical disc. The ex400 information reproduction/recording unit includes the component elements ex401, ex402, ex403, ex404, ex405, ex406 and ex407 to be described below. The ex401 optical head radiates a laser spot on a recording surface of the ex215 recording media, which is an optical disc, to write information, and detects light reflected from the recording surface of the ex215 recording media to read the information. The ex402 modulation recording unit electrically drives a semiconductor laser included in the ex401 optical head, and modulates the laser light according to the recorded data. The ex403 reproduction demodulation unit amplifies a reproduction signal obtained by electrically detecting the light reflected from the recording surface using a photodetector included in the ex401 optical head, and demodulates the reproduction signal by separating a signal component recorded on the ex215 recording media to reproduce the necessary information. The ex404 buffer temporarily holds the information to be recorded on the ex215 recording media and the information reproduced from the ex215 recording media. An ex405 disk motor rotates the ex215 recording media. An ex406 servo control unit moves the ex401 optical head to a predetermined information track while controlling the rotation drive of the ex405 disk motor so as to follow the laser spot. The ex407 system control unit controls overall the ex400 information reproduction/recording unit. 
The reading and writing processes can be implemented by the ex407 system control unit using various information stored in the ex404 buffer and generating and adding new information as necessary, and by the ex402 modulation recording unit, the ex403 reproduction demodulation unit and the ex406 servo control unit that record and reproduce information through the ex401 optical head while being operated in a coordinated manner. The ex407 system control unit includes, for example, a microprocessor, and executes processing by causing a computer to execute a program for reading and writing. Although the ex401 optical head radiates a laser spot in the description, it can perform high-density recording using near-field light. Figure 18 schematically illustrates the ex215 recording media, which is the optical disc. On the recording surface of the ex215 recording media, guide grooves are formed in the shape of a spiral, and an ex230 information track records, in advance, address information indicating an absolute position on the disk according to a change in the shape of the guide grooves. The address information includes information for determining the positions of ex231 recording blocks, each of which is a unit for recording data. An apparatus that records and reproduces data reproduces the ex230 information track and reads the address information in order to determine the positions of the recording blocks. In addition, the ex215 recording media includes an ex233 data recording area, an ex232 inner circumference area and an ex234 outer circumference area. The ex233 data recording area is the area for use in recording user data. The ex232 inner circumference area and the ex234 outer circumference area, which are inside and outside the ex233 data recording area respectively, are for specific use except for recording the user data. 
The ex400 information reproduction/recording unit reads and writes encoded audio data, encoded video data, or encoded data obtained by multiplexing the encoded audio data and the encoded video data, from and on the ex233 data recording area of the ex215 recording media. Although an optical disc having a single layer, such as a DVD and a BD, is described as an example in the description, the optical disc is not limited to such, and may be an optical disc having a multilayer structure and capable of being recorded on a part other than the surface. Furthermore, the optical disc may have a structure for multidimensional recording/reproduction, such as recording information using light of colors with different wavelengths on the same part of the optical disc, and recording information having different layers from various angles. Furthermore, a car ex210 having an antenna ex205 can receive data from the satellite ex202 and others, and play video on a display device such as an ex211 car navigation system installed in the ex210 car, in the ex200 digital broadcasting system. Here, a configuration of the ex211 car navigation system will be the configuration including, for example, a GPS receiving unit, in the configuration illustrated in Figure 16. The same will be true for the configuration of the ex111 computer, the ex114 cell phone and others. Figure 19A illustrates the cell phone ex114 that uses the moving picture encoding method and the moving picture decoding method described in the embodiments. The ex114 cell phone includes: an ex350 antenna for transmitting and receiving radio waves via the ex110 base station; an ex365 camera unit capable of capturing moving and still images; and an ex358 display unit such as a liquid crystal display for displaying data such as decoded video captured by the ex365 camera unit or received via the ex350 antenna. 
The cell phone ex114 further includes: a main body unit including a set of operation keys ex366; an ex357 audio output unit such as a speaker for output of audio; an ex356 audio input unit such as a microphone for input of audio; an ex367 memory unit for storing captured video or still images, recorded audio, encoded or decoded data of received video, still images, e-mails, or others; and an ex364 slot unit that is an interface unit for a recording medium that stores data in the same manner as the ex367 memory unit. In the following, an example of a configuration of the cell phone ex114 will be described with reference to Figure 19B. In the cell phone ex114, an ex360 main control unit designed to control overall each unit of the main body including the ex358 display unit as well as the ex366 operation keys is connected mutually, via an ex370 synchronous bus, to an ex361 power supply circuit unit, an ex362 operation input control unit, an ex355 video signal processing unit, an ex363 camera interface unit, an ex359 liquid crystal display (LCD) control unit, an ex352 modulation/demodulation unit, an ex353 multiplexing/demultiplexing unit, an ex354 audio signal processing unit, the ex364 slot unit and the ex367 memory unit. When a call-end key and a power key are turned ON by a user operation, the ex361 power supply circuit unit supplies the respective units with power from a battery pack so as to activate the cell phone ex114, which is digital and is equipped with the camera. In the cell phone ex114, the ex354 audio signal processing unit converts the audio signals collected by the ex356 audio input unit in voice conversation mode into digital audio signals, under control of the ex360 main control unit including a CPU, ROM and RAM. 
Then, the modulation/demodulation unit ex352 performs spread spectrum processing on the digital audio signals, and the transmit and receive unit ex351 performs digital-to-analog conversion and frequency conversion on the data, in order to transmit the resulting data via the antenna ex350. Also, in the cell phone ex114, the transmit and receive unit ex351 amplifies the data received by the antenna ex350 in voice conversation mode and performs frequency conversion and analog-to-digital conversion on the data. Then, the modulation/demodulation unit ex352 performs inverse spread spectrum processing on the data, and the audio signal processing unit ex354 converts it into analog audio signals, in order to output them via the audio output unit ex357. In addition, when an e-mail in data communication mode is transmitted, text data of the e-mail message entered by operating the operation keys ex366 and others of the main body is sent to the main control unit ex360 by means of the operation input control unit ex362. Then, the modulation/demodulation unit ex352 performs spread spectrum processing on the text data, and the transmit and receive unit ex351 performs digital-to-analog conversion and frequency conversion on the resulting data, in order to transmit the data to the base station ex110 via the antenna ex350. When an e-mail message is received, processing that is approximately the inverse of the processing to transmit an e-mail message is performed on the received data, and the resulting data is provided to the display unit ex358. When video, still images, or video and audio in data communication mode are transmitted, the video signal processing unit ex355 compresses and encodes the video signals provided by the camera unit ex365 using the moving image encoding method shown in each of the embodiments (i.e. functioning as the picture encoding apparatus in accordance with the present invention), and transmits the encoded video data to the multiplexing/demultiplexing unit ex353. 
Conversely, when the ex365 camera unit captures video, still images and so on, the ex354 audio signal processing unit encodes audio signals collected by the ex356 audio input unit, and transmits the encoded audio data to the ex353 multiplexing/demultiplexing unit. The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data provided by the video signal processing unit ex355 and the encoded audio data provided by the audio signal processing unit ex354, using a predetermined method. Then, the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 performs spread spectrum processing on the multiplexed data, and the transmit and receive unit ex351 performs digital-to-analog conversion and frequency conversion on the data, in order to transmit the resulting data via the ex350 antenna. When receiving data of a video file that is linked to a Web page and others in data communication mode, or when receiving an e-mail message with video and/or audio attached, in order to decode the multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bitstream and an audio data bitstream, and supplies the video signal processing unit ex355 with the encoded video data and the audio signal processing unit ex354 with the encoded audio data, via the synchronous bus ex370. The video signal processing unit ex355 decodes the video signal using a moving picture decoding method corresponding to the moving picture encoding method shown in each of the embodiments (i.e., functioning as the picture decoding apparatus according to the present invention), and then the display unit ex358 displays, for example, the video and still images included in the video file linked to the Web page, via the LCD control unit ex359. 
Furthermore, the audio signal processing unit ex354 decodes the audio signal, and the audio output unit ex357 provides the audio. Furthermore, similarly to the ex300 television, a terminal such as the cell phone ex114 can have 3 types of implementation configurations including not only (i) a transmitting and receiving terminal including both an encoding apparatus and a decoding apparatus, but also (ii) a transmitting terminal including only an encoding apparatus and (iii) a receiving terminal including only a decoding apparatus. Although the digital broadcasting system ex200 receives and transmits the multiplexed data obtained by multiplexing audio data onto video data in the description, the multiplexed data can be data obtained by multiplexing, not audio data, but character data related to the video onto the video data, and may be not multiplexed data but the video data itself. As such, the moving image encoding method and the moving image decoding method in each of the embodiments can be used in any of the devices and systems described. Thus, the advantages described in each of the embodiments can be obtained. Furthermore, the present invention is not limited to the embodiments, and various modifications and revisions are possible without departing from the scope of the present invention. Embodiment 7 Video data can be generated by switching, as necessary, between (i) the moving image encoding method or the moving image encoding apparatus shown in each of the embodiments and (ii) a moving image encoding method or a moving image encoding apparatus in conformity with a different standard, such as MPEG-2, MPEG-4 AVC and VC-1. Here, when a plurality of video data that conforms to the different standards is generated and is then decoded, the decoding methods need to be selected to conform to the different standards. However, since to which standard each of the plurality of video data to be decoded conforms cannot be identified, there is a problem that an appropriate decoding method cannot be selected. 
In order to solve the problem, multiplexed data obtained by multiplexing audio data and others onto video data has a structure including identification information indicating to which standard the video data conforms. The specific structure of the multiplexed data including the video data generated by the moving picture coding method and by the moving picture coding apparatus shown in each of the embodiments will be described below. The multiplexed data is a digital stream in the MPEG-2 Transport Stream format. Figure 20 illustrates a structure of the multiplexed data. As illustrated in Figure 20, the multiplexed data can be obtained by multiplexing at least one of a video stream, an audio stream, a presentation graphics (PG) stream and an interactive graphics (IG) stream. The video stream represents the primary video and secondary video of a movie, the audio stream represents a primary audio part and a secondary audio part to be mixed with the primary audio part, and the presentation graphics stream represents subtitles of the movie. Here, the primary video is normal video to be displayed on a screen, and the secondary video is video to be displayed in a smaller window within the main video. Furthermore, the interactive graphics stream represents an interactive screen to be generated by arranging GUI components on a screen. The video stream is encoded by the moving picture coding method or by the moving picture coding apparatus shown in each of the embodiments, or by a moving picture coding method or a moving picture coding apparatus in conformity with a conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1. The audio stream is encoded in accordance with a standard such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD and linear PCM. Each stream included in the multiplexed data is identified by a PID. 
For example, 0x1011 is allocated to the video stream to be used for video of a movie, 0x1100 to 0x111F are allocated to the audio streams, 0x1200 to 0x121F are allocated to the presentation graphics streams, 0x1400 to 0x141F are allocated to the interactive graphics streams, 0x1B00 to 0x1B1F are allocated to the video streams to be used for secondary video of the movie, and 0x1A00 to 0x1A1F are allocated to the audio streams to be used for the secondary audio to be mixed with the primary audio. Figure 21 schematically illustrates how data is multiplexed. First, a video stream ex235 composed of video frames and an audio stream ex238 composed of audio frames are transformed into a stream of PES packets ex236 and a stream of PES packets ex239, and further into TS packets ex237 and TS packets ex240, respectively. Similarly, data of a presentation graphics stream ex241 and data of an interactive graphics stream ex244 are transformed into a stream of PES packets ex242 and a stream of PES packets ex245, and further into TS packets ex243 and TS packets ex246, respectively. These TS packets are multiplexed into a stream to obtain the multiplexed data ex247. Figure 22 illustrates in more detail how a video stream is stored in a stream of PES packets. The first bar in Figure 22 shows a stream of video frames within a video stream. The second bar shows the stream of PES packets. As indicated by the arrows denoted yy1, yy2, yy3 and yy4 in Figure 22, the video stream is divided into images such as I images, B images and P images, each of which is a video display unit, and the images are stored in a payload of each of the PES packets. Each of the PES packets has a PES header, and the PES header stores a presentation time stamp (PTS) indicating a display time of the image, and a decoding time stamp (DTS) indicating a decoding time of the image. Figure 23 illustrates a format of TS packets to be finally written on the multiplexed data. 
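The PID allocation listed above amounts to a small lookup table. The sketch below is illustrative: the function name and the returned labels are assumptions, and only the ranges stated in the text are covered.

```python
def classify_pid(pid):
    """Map a PID to the stream category, per the allocation described above."""
    ranges = [
        ((0x1011, 0x1011), "primary video"),
        ((0x1100, 0x111F), "audio"),
        ((0x1200, 0x121F), "presentation graphics"),
        ((0x1400, 0x141F), "interactive graphics"),
        ((0x1B00, 0x1B1F), "secondary video"),
        ((0x1A00, 0x1A1F), "secondary audio"),
    ]
    for (low, high), kind in ranges:
        if low <= pid <= high:
            return kind
    return "unknown"
```

For example, a demultiplexer can use such a mapping to route each TS packet to the appropriate elementary-stream decoder.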
Each of the TS packets is a 188-byte fixed-length packet including a 4-byte TS header having information such as a PID for identifying a stream, and a 184-byte TS payload for storing data. The PES packets are divided and stored in the TS payloads, respectively. When a BD-ROM is used, each of the TS packets is given a 4-byte TP_Extra_Header, thus resulting in 192-byte source packets. The source packets are written on the multiplexed data. The TP_Extra_Header stores information such as an Arrival_Time_Stamp (ATS). The ATS shows a transfer start time at which each of the TS packets is to be transferred to a PID filter. The source packets are arranged in the multiplexed data as shown at the bottom of Figure 23. The numbers incrementing from the head of the multiplexed data are called source packet numbers (SPNs). Each of the TS packets included in the multiplexed data includes not only streams of audio, video, subtitles and others, but also a Program Association Table (PAT), a Program Map Table (PMT) and a Program Clock Reference (PCR). The PAT shows what a PID in a PMT used in the multiplexed data indicates, and a PID of the PAT itself is registered as zero. The PMT stores PIDs of the streams of video, audio, subtitles and others included in the multiplexed data, and attribute information of the streams corresponding to the PIDs. The PMT also has various descriptors relating to the multiplexed data. The descriptors have information such as copy control information showing whether copying of the multiplexed data is permitted or not. The PCR stores STC time information corresponding to an ATS showing when the PCR packet is transferred to a decoder, in order to achieve synchronization between an Arrival Time Clock (ATC) that is a time axis of ATSs, and a System Time Clock (STC) that is a time axis of PTSs and DTSs. Figure 24 illustrates the data structure of the PMT in detail. A PMT header is disposed at the top of the PMT. The PMT header describes the length of data included in the PMT and others. 
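The packet sizes described above (a 188-byte TS packet with a 4-byte header and 184-byte payload, extended to a 192-byte source packet by the 4-byte TP_Extra_Header carrying the ATS) can be illustrated with a short parsing sketch. Note that the 0x47 sync byte and the 13-bit PID layout come from the MPEG-2 TS specification rather than from this text, and the helper names are assumptions.

```python
TS_PACKET_SIZE = 188       # 4-byte TS header + 184-byte TS payload
SOURCE_PACKET_SIZE = 192   # 4-byte TP_Extra_Header (carrying the ATS) + TS packet

def parse_ts_pid(packet: bytes) -> int:
    """Extract the 13-bit PID identifying which stream a TS packet belongs to."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != 0x47:  # 0x47 is the TS sync byte
        raise ValueError("not a valid TS packet")
    # PID spans the low 5 bits of byte 1 and all 8 bits of byte 2.
    return ((packet[1] & 0x1F) << 8) | packet[2]

def source_packet_number(byte_offset: int) -> int:
    """SPN: the index of the 192-byte source packet containing a given byte offset."""
    return byte_offset // SOURCE_PACKET_SIZE
```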
A plurality of descriptors relating to the multiplexed data is disposed after the PMT header. Information such as the copy control information is described in the descriptors. After the descriptors, a plurality of pieces of stream information relating to the streams included in the multiplexed data is disposed. Each piece of stream information includes stream descriptors, each describing information such as a stream type for identifying a compression codec of a stream, a stream PID and stream attribute information (such as a frame rate or an aspect ratio). The stream descriptors are equal in number to the number of streams in the multiplexed data. When the multiplexed data is recorded on a recording medium and others, it is recorded together with multiplexed data information files. Each of the multiplexed data information files is management information of the multiplexed data, as shown in Figure 25. The multiplexed data information files are in one-to-one correspondence with the multiplexed data, and each of the files includes multiplexed data information, stream attribute information and an entry map. As illustrated in Figure 25, the multiplexed data information includes a system rate, a playback start time and a playback end time. The system rate indicates the maximum transfer rate at which a system target decoder, to be described later, transfers the multiplexed data to a PID filter. The intervals of the ATSs included in the multiplexed data are set to be no greater than the system rate. The playback start time indicates a PTS of a video frame at the head of the multiplexed data. An interval of one frame is added to a PTS of a video frame at the tail of the multiplexed data, and the PTS is set to the playback end time. As shown in Figure 26, a piece of attribute information is registered in the stream attribute information, for each PID of each stream included in the multiplexed data. 
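The playback start and end times described above can be computed as follows. This is a sketch under stated assumptions: the 90 kHz PTS clock is the usual MPEG convention but is not stated in the text, and the function and parameter names are illustrative.

```python
PTS_CLOCK_HZ = 90_000  # common MPEG PTS/DTS clock rate (assumed; not stated in the text)

def playback_times(head_pts: int, tail_pts: int, frame_rate: int):
    """Start time is the PTS of the head video frame; end time is the PTS of
    the tail video frame plus one frame interval, as described above."""
    frame_interval = PTS_CLOCK_HZ // frame_rate
    return head_pts, tail_pts + frame_interval
```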
Each piece of attribute information has different information depending on whether the corresponding stream is a video stream, an audio stream, a presentation graphics stream or an interactive graphics stream. Each piece of video stream attribute information carries information including what type of compression encoder/decoder is used to compress the video stream, and the resolution, aspect ratio and frame rate of the pieces of picture data included in the video stream. Each piece of audio stream attribute information carries information including what type of compression encoder/decoder is used to compress the audio stream, how many channels are included in the audio stream, which language the audio stream supports, and how high the sampling frequency is. The video stream attribute information and the audio stream attribute information are used to initialize a decoder before the player plays back the information.

In Modality 7, among the multiplexed data, the stream type included in the PMT is used. Furthermore, when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving picture coding method or the moving picture coding apparatus described in each of the modalities includes a step or a unit for allocating unique information indicating video data generated by the moving picture coding method or by the moving picture coding apparatus in each of the modalities, to the stream type included in the PMT or to the video stream attribute information. With this arrangement, the video data generated by the moving picture coding method or by the moving picture coding apparatus described in each of the modalities can be distinguished from video data that conforms to another standard.

Furthermore, Figure 27 illustrates the steps of the moving picture decoding method according to Modality 7.
In step exS100, the stream type included in the PMT or the video stream attribute information is obtained from the multiplexed data. Next, in step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture coding method or by the moving picture coding apparatus in each of the modalities. When it is determined that the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture coding method or by the moving picture coding apparatus in each of the modalities, in step exS102 decoding is performed by the moving picture decoding method in each of the modalities. Furthermore, when the stream type or the video stream attribute information indicates conformity with conventional standards, such as MPEG-2, MPEG-4 AVC and VC-1, in step exS103 decoding is performed by a moving picture decoding method in conformity with the conventional standards.

As such, allocating a new unique value to the stream type or to the video stream attribute information enables determination of whether or not the moving picture decoding method or the moving picture decoding apparatus described in each of the modalities can perform the decoding. Even when multiplexed data that conforms to a different standard is input, an appropriate decoding method or apparatus can be selected. Thus, it becomes possible to decode information without any error. Furthermore, the moving picture coding method or apparatus, or the moving picture decoding method or apparatus in Modality 7 can be used in the devices and systems described above.
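The branch in steps exS100 to exS103 can be sketched as follows. The stream-type codes for the conventional standards follow commonly registered MPEG-2 TS values and are an assumption for illustration; the value for the new method is a hypothetical placeholder, since the actually allocated unique value is not given here.

```python
# Hypothetical unique value allocated to video generated by the method of
# the modalities; the real allocation is not specified in this text.
NEW_METHOD_STREAM_TYPE = 0xFF

# Commonly registered stream-type codes (assumption for illustration).
CONVENTIONAL_STREAM_TYPES = {
    0x02: "MPEG-2",
    0x1B: "MPEG-4 AVC",
    0xEA: "VC-1",
}

def select_decoding_method(stream_type: int) -> str:
    """exS101: test the stream type; exS102/exS103: pick the decoder."""
    if stream_type == NEW_METHOD_STREAM_TYPE:
        return "moving picture decoding method of the modalities"  # exS102
    if stream_type in CONVENTIONAL_STREAM_TYPES:
        return CONVENTIONAL_STREAM_TYPES[stream_type]              # exS103
    raise ValueError(f"unknown stream type: {stream_type:#x}")
```

An unknown stream type raises an error, matching the point that a correct decoder can only be selected when the standard is identifiable.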
Modality 8

Each of the moving picture coding method, the moving picture coding apparatus, the moving picture decoding method and the moving picture decoding apparatus in each of the modalities is typically achieved in the form of an integrated circuit or a Large Scale Integrated (LSI) circuit. As an example of the LSI, Figure 28 illustrates a configuration of an LSI ex500 that is made into one chip. The LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508 and ex509 to be described below, and the elements are connected to each other through a bus ex510. The power supply circuit unit ex505 is activated by supplying each of the elements with power when the power supply circuit unit ex505 is turned on.

For example, when coding is performed, the LSI ex500 receives an AV signal from a microphone ex117, a camera ex113 and others through an AV input/output ex509 under control of a control unit ex501 including a CPU ex502, a memory controller ex503, a stream controller ex504 and a drive frequency control unit ex512. The received AV signal is temporarily stored in an external memory ex511, such as an SDRAM. Under control of the control unit ex501, the stored data is segmented into data portions according to the processing amount and speed to be transmitted to a signal processing unit ex507. Then, the signal processing unit ex507 codes an audio signal and/or a video signal. Here, the coding of the video signal is the coding described in each of the modalities. Furthermore, the signal processing unit ex507 sometimes multiplexes the coded audio data and the coded video data, and a stream input/output ex506 provides the multiplexed data to the outside. The provided multiplexed data is transmitted to the base station ex107, or written on the recording medium ex215. When data sets are multiplexed, the data should be temporarily stored in the buffer ex508 so that the data sets are synchronized with each other.
Although the memory ex511 is an element outside the LSI ex500, it may be included in the LSI ex500. The buffer ex508 is not limited to one buffer, but may be composed of buffers. Additionally, the LSI ex500 may be made into one chip or a plurality of chips.

Furthermore, although the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504 and the drive frequency control unit ex512, the configuration of the control unit ex501 is not limited to this. For example, the signal processing unit ex507 may additionally include a CPU. Adding another CPU to the signal processing unit ex507 can improve the processing speed. Furthermore, as another example, the CPU ex502 may serve as or be a part of the signal processing unit ex507 and, for example, may include an audio signal processing unit. In such a case, the control unit ex501 includes the signal processing unit ex507 or the CPU ex502 including a part of the signal processing unit ex507.

The name used here is LSI, but it may also be called IC, system LSI, super LSI or ultra LSI depending on the degree of integration. Furthermore, the ways to achieve integration are not limited to the LSI, and a special circuit or a general-purpose processor and so forth can also achieve the integration. A Field Programmable Gate Array (FPGA) that can be programmed after manufacturing LSIs, or a reconfigurable processor that allows reconfiguration of the connection or configuration of an LSI can be used for the same purpose. In the future, with the advancement of semiconductor technology, a brand-new technology may replace the LSI. The function blocks can be integrated using such a technology. The possibility is that the present invention is applied to biotechnology.
Modality 9

When video data is decoded by the moving picture decoding method or by the moving picture decoding apparatus described in each of the modalities, compared to when video data that conforms to a conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1, is decoded, the amount of computation likely increases. Thus, the LSI ex500 needs to be set to a drive frequency higher than that of the CPU ex502 to be used when video data in conformity with the conventional standard is decoded. However, when the drive frequency is set higher, there is the problem that the power consumption increases.

In order to solve the problem, the moving picture decoding apparatus, such as the television ex300 and the LSI ex500, is configured to determine which standard the video data conforms to, and to switch between the drive frequencies according to the determined standard. Figure 29 illustrates a configuration ex800 in Modality 9. A drive frequency switching unit ex803 sets a drive frequency to a higher drive frequency when video data is generated by the moving picture coding method or by the moving picture coding apparatus described in each of the modalities. Then, the drive frequency switching unit ex803 instructs a decoding processing unit ex801 that executes the moving picture decoding method described in each of the modalities to decode the video data. When the video data conforms to the conventional standard, the drive frequency switching unit ex803 sets a drive frequency to a drive frequency lower than that for the video data generated by the moving picture coding method or by the moving picture coding apparatus described in each of the modalities. Then, the drive frequency switching unit ex803 instructs the decoding processing unit ex802 that conforms to the conventional standard to decode the video data. More specifically, the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 in Figure 28.
Here, each of the decoding processing unit ex801 that executes the moving picture decoding method described in each of the modalities and the decoding processing unit ex802 that conforms to the conventional standard corresponds to the signal processing unit ex507 in Figure 28. The CPU ex502 determines which standard the video data conforms to. Then, the drive frequency control unit ex512 determines a drive frequency based on a signal from the CPU ex502. Furthermore, the signal processing unit ex507 decodes the video data based on the signal from the CPU ex502. For example, the identification information described in Modality 7 is likely used to identify the video data. The identification information is not limited to that described in Modality 7, but may be any information as long as the information indicates which standard the video data conforms to. For example, when which standard the video data conforms to can be determined based on an external signal for determining that the video data is used for a television or a disc, etc., the determination may be made based on such an external signal. Furthermore, the CPU ex502 selects a drive frequency based on, for example, a look-up table in which the standards of the video data are associated with the drive frequencies, as shown in Figure 31. The drive frequency can be selected by storing the look-up table in the buffer ex508 and in an internal memory of an LSI, and with reference to the look-up table by the CPU ex502.

Figure 30 illustrates the steps for executing a method in Modality 9. First, in step exS200, the signal processing unit ex507 obtains the identification information from the multiplexed data. Next, in step exS201, the CPU ex502 determines whether or not the video data was generated by the coding method and the coding apparatus described in each of the modalities, based on the identification information.
When the video data was generated by the coding method and the coding apparatus described in each of the modalities, in step exS202 the CPU ex502 transmits a signal for setting the drive frequency to a higher drive frequency to the drive frequency control unit ex512. Then, the drive frequency control unit ex512 sets the drive frequency to the higher drive frequency. On the other hand, when the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1, in step exS203 the CPU ex502 transmits a signal for setting the drive frequency to a lower drive frequency to the drive frequency control unit ex512. Then, the drive frequency control unit ex512 sets the drive frequency to the drive frequency lower than that in the case where the video data was generated by the coding method and the coding apparatus described in each of the modalities.

Furthermore, together with the switching of the drive frequencies, the power conservation effect can be improved by changing the voltage to be applied to the LSI ex500 or to an apparatus including the LSI ex500. For example, when the drive frequency is set lower, the voltage to be applied to the LSI ex500 or to the apparatus including the LSI ex500 is likely set to a voltage lower than that in the case where the drive frequency is set higher.

Furthermore, when the amount of computation for decoding is larger, the drive frequency may be set higher, and when the amount of computation for decoding is smaller, the drive frequency may be set lower, as the method for setting the drive frequency. Thus, the setting method is not limited to those described above.
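Steps exS200 to exS203, combined with a look-up table in the spirit of figure 31, can be sketched as follows. The frequency and voltage values are invented placeholders, since the text leaves them implementation-dependent.

```python
# Hypothetical look-up table pairing each standard with a drive frequency
# (MHz) and a supply voltage (V); the real values are implementation-defined.
LOOKUP_TABLE = {
    "modalities": (500, 1.2),   # higher frequency for the new method (exS202)
    "MPEG-2":     (350, 1.0),   # lower frequency, and a lower voltage,
    "MPEG-4 AVC": (350, 1.0),   # for the conventional standards (exS203)
    "VC-1":       (350, 1.0),
}

def configure_drive(identification_info: str) -> tuple:
    """exS200/exS201: read the identification information, then pick the
    drive frequency and voltage from the look-up table."""
    return LOOKUP_TABLE[identification_info]
```

Storing the table in an internal memory lets the CPU resolve the setting without recomputation each time a stream starts.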
For example, when the amount of computation for decoding video data in conformity with MPEG-4 AVC is larger than the amount of computation for decoding video data generated by the moving picture coding method and by the moving picture coding apparatus described in each of the modalities, the drive frequency is likely set in reverse order to the setting described above.

Furthermore, the method for setting the drive frequency is not limited to the method for setting the drive frequency lower. For example, when the identification information indicates that the video data was generated by the moving picture coding method and by the moving picture coding apparatus described in each of the modalities, the voltage to be applied to the LSI ex500 or to the apparatus including the LSI ex500 is likely set higher. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1, the voltage to be applied to the LSI ex500 or to the apparatus including the LSI ex500 is likely set lower. As another example, when the identification information indicates that the video data was generated by the moving picture coding method and by the moving picture coding apparatus described in each of the modalities, the driving of the CPU ex502 probably does not have to be suspended. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1, the driving of the CPU ex502 is probably suspended at a given time, because the CPU ex502 has extra processing capacity. Even when the identification information indicates that the video data was generated by the moving picture coding method and by the moving picture coding apparatus described in each of the modalities, in the case where the CPU ex502 has extra processing capacity, the driving of the CPU ex502 is probably suspended at a given time.
In such a case, the suspending time is probably set shorter than that in the case where the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1. In this way, the power conservation effect can be improved by switching between the drive frequencies according to the standard to which the video data conforms. Furthermore, when the LSI ex500 or the apparatus including the LSI ex500 is driven using a battery, the battery life can be extended with the power conservation effect.

Modality 10

There are cases where a plurality of video data that conform to different standards is provided to devices and systems, such as a television and a mobile phone. In order to enable decoding of the plurality of video data that conform to the different standards, the signal processing unit ex507 of the LSI ex500 needs to conform to the different standards. However, the problems of increase in the scale of the circuit of the LSI ex500 and increase in cost arise with the individual use of the signal processing units ex507 that conform to the respective standards.

In order to solve the problems, what is conceived is a configuration in which the decoding processing unit for implementing the moving picture decoding method described in each of the modalities and the decoding processing unit that conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC and VC-1, are partially shared. Ex900 in Figure 32A shows an example of the configuration. For example, the moving picture decoding method described in each of the modalities and the moving picture decoding method that conforms to MPEG-4 AVC have, partially in common, the details of processing, such as entropy coding, inverse quantization, deblocking filtering and motion-compensated prediction. The details of the processing to be shared likely include the use of a decoding processing unit ex902 that conforms to MPEG-4 AVC.
In contrast, a dedicated decoding processing unit ex901 is likely used for other processing unique to the present invention. Since the present invention is characterized by motion compensation in particular, for example, the dedicated decoding processing unit ex901 is used for motion compensation. Otherwise, the decoding processing unit is likely shared for one of entropy decoding, inverse quantization and deblocking filtering, or for all of the processing. The decoding processing unit for implementing the moving picture decoding method described in each of the modalities may be shared for the processing to be shared, and a dedicated decoding processing unit may be used for the processing unique to MPEG-4 AVC.

Furthermore, ex1000 in Figure 32B shows another example in which the processing is partially shared. This example uses a configuration including a dedicated decoding processing unit ex1001 that supports the processing unique to the present invention, a dedicated decoding processing unit ex1002 that supports the processing unique to another conventional standard, and a decoding processing unit ex1003 that supports the processing to be shared between the moving picture decoding method of the present invention and the conventional moving picture decoding method. Here, the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for the processing of the present invention and for the processing of the conventional standard, respectively, and may be units capable of implementing general processing. Furthermore, the configuration of Modality 10 can be implemented by the LSI ex500.

As such, reducing the scale of the circuit of an LSI and reducing the cost are possible by sharing the decoding processing unit for the processing to be shared between the moving picture decoding method of the present invention and the moving picture decoding method in conformity with the conventional standard.
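The sharing arrangement ex1000 can be sketched as follows. The unit names mirror the reference signs above, while the processing lists and the function name are illustrative assumptions.

```python
SHARED_PROCESSING = ("entropy decoding", "inverse quantization",
                     "deblocking filtering")   # shared unit ex1003

def decoding_units(conforms_to_conventional_standard: bool) -> tuple:
    """Engage the shared unit ex1003 plus the dedicated unit matching the
    stream: ex1002 for the conventional standard, ex1001 otherwise."""
    dedicated = ("motion compensation (ex1002, conventional)"
                 if conforms_to_conventional_standard
                 else "motion compensation (ex1001, present invention)")
    return SHARED_PROCESSING + (dedicated,)
```

Only the dedicated unit differs between the two paths, which is what keeps the circuit scale down.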
Industrial Applicability

The image encoding method and the image decoding method according to the present invention are applicable, for example, to televisions, digital video recorders, car navigation systems, mobile phones, digital cameras and digital video cameras.

List of Reference Symbols

100, 300 Image coding apparatus
101 Subtraction unit
102 Orthogonal transform unit
103 Quantization unit
104 Variable length coding unit
105, 205 Inverse quantization unit
106, 206 Inverse orthogonal transform unit
107, 207 Addition unit
108, 208 Block memory
109, 209 Frame memory
110, 210 Intra prediction unit
111, 211 Inter prediction unit
112, 212 Switching unit
113 Image type determination unit
114, 214 Inter prediction control unit
115, 215 Reference image list management unit
116, 216 Addition determination unit
200, 400 Image decoding apparatus
204 Variable length decoding unit
301, 401, 501 Addition unit
302, 402, 502 Selection unit
303, 503 Encoding unit
403, 504 Decoding unit
500 Image encoding and decoding apparatus
Claims (6)

[0001] 1. Picture encoding method of encoding a current picture per block with bi-prediction using both of (i) a first reference picture list including a first current reference picture for a current block, the first current reference picture being referred to by a first current motion vector, and (ii) a second reference picture list including a second current reference picture for the current block, the second current reference picture being referred to by a second current motion vector, said method comprising: judging (S302) whether or not the second current reference picture for the current block is identical to a second adjacent reference picture for an adjacent block, said adjacent block being adjacent to the current block and encoded with bi-prediction, the second adjacent reference picture being (i) included in a second adjacent reference picture list and (ii) referred to by a second adjacent motion vector; when the second current reference picture is considered identical to the second adjacent reference picture, adding (S303) the second adjacent motion vector to a candidate list for the second current motion vector; judging (S403) whether or not the second current reference picture is identical to a first adjacent reference picture for the adjacent block, the first adjacent reference picture being (i) included in a first adjacent reference picture list and (ii) referred to by the first adjacent motion vector; when the second current reference picture is considered identical to the first adjacent reference picture, adding (S404) the first adjacent motion vector to the candidate list for the second current motion vector; selecting (S107, S112) a predicted motion vector to be used to encode the second current motion vector from the candidate list for the second current motion vector; and encoding (S107, S112) the second current motion vector using the selected predicted motion vector, wherein the second judgment step (S403) is performed only when the
second current reference picture is considered not identical to the second adjacent reference picture.

[0002] 2. Picture encoding apparatus for encoding a current picture per block with bi-prediction using both of (i) a first reference picture list including a first current reference picture for a current block, the first current reference picture being referred to by a first current motion vector, and (ii) a second reference picture list including a second current reference picture for the current block, the second current reference picture being referred to by a second current motion vector, said apparatus comprising: an addition unit (301) configured to: judge (S302) whether or not the second current reference picture for the current block is identical to a second adjacent reference picture for an adjacent block, said adjacent block being adjacent to the current block and encoded with bi-prediction, and the second adjacent reference picture being (i) included in a second adjacent reference picture list and (ii) referred to by the second adjacent motion vector; when the second current reference picture is considered identical to the second adjacent reference picture, add (S303) the second adjacent motion vector to a candidate list for the second current motion vector; judge (S403) whether or not the second current reference picture is identical to a first adjacent reference picture for the adjacent block, the first adjacent reference picture being (i) included in a first adjacent reference picture list and (ii) referred to by the first adjacent motion vector; and, when the second current reference picture is considered identical to the first adjacent reference picture, add (S404) the first adjacent motion vector to the candidate list for the second current motion vector; a selection unit (302) configured to select a predicted motion vector to be used to encode the second current motion vector from the candidate list for the second current motion vector; and an encoding unit (303) configured to encode the second current motion vector using the selected predicted motion vector, wherein the second judgment step (S403) is performed only when the second current reference picture is considered not identical to the second adjacent reference picture.

[0003] 3. Picture encoding method according to claim 1, wherein, when the second current reference picture is considered not identical to the first adjacent reference picture, the first adjacent motion vector is not added to the candidate list for the second current motion vector, and wherein, when the second current reference picture is considered not identical to the second adjacent reference picture, the second adjacent motion vector is not added to the candidate list for the second current motion vector.

[0004] 4.
Picture decoding method of decoding a current picture per block with bi-prediction using both of (i) a first reference picture list including a first current reference picture for a current block, the first current reference picture being referred to by a first current motion vector, and (ii) a second reference picture list including a second current reference picture for the current block, the second current reference picture being referred to by a second current motion vector, said method comprising: judging (S302) whether or not the second current reference picture for the current block is identical to a second adjacent reference picture for an adjacent block, said adjacent block being adjacent to the current block and decoded with bi-prediction, the second adjacent reference picture being (i) included in a second adjacent reference picture list and (ii) referred to by a second adjacent motion vector; when the second current reference picture is considered identical to the second adjacent reference picture, adding (S303) the second adjacent motion vector to a candidate list for the second current motion vector; judging (S403) whether or not the second current reference picture is identical to a first adjacent reference picture for the adjacent block, the first adjacent reference picture being (i) included in a first adjacent reference picture list and (ii) referred to by the first adjacent motion vector; when the second current reference picture is considered identical to the first adjacent reference picture, adding (S404) the first adjacent motion vector to the candidate list for the second current motion vector; selecting (S606, S611) a predicted motion vector to be used to decode the second current motion vector from the candidate list for the second current motion vector; and decoding (S606, S611) the second current motion vector using the selected predicted motion vector, wherein the second judgment step (S403) is performed only when the second current reference picture is considered not identical to the second adjacent reference picture.

[0005] 5. Picture decoding apparatus that decodes a current picture per block with bi-prediction using both of (i) a first reference picture list including a first current reference picture for a current block, the first current reference picture being referred to by a first current motion vector, and (ii) a second reference picture list including a second current reference picture for the current block, the second current reference picture being referred to by a second current motion vector, said apparatus comprising: an addition unit (401) configured to: judge (S302) whether or not the second current reference picture for the current block is identical to a second adjacent reference picture for an adjacent block, said adjacent block being adjacent to the current block and decoded with bi-prediction, and the second adjacent reference picture being (i) included in a second adjacent reference picture list and (ii) referred to by the second adjacent motion vector; when the second current reference picture is considered identical to the second adjacent reference picture, add (S303) the second adjacent motion vector to a candidate list for the second current motion vector; judge (S403) whether or not the second current reference picture is identical to a first adjacent reference picture for the adjacent block, the first adjacent reference picture being (i) included in a first adjacent reference picture list and (ii) referred to by the first adjacent motion vector; and, when the second current reference picture is considered identical to the first adjacent reference picture, add (S404) the first adjacent motion vector to the candidate list for the second current motion vector; a selection unit (402) configured to select a predicted motion vector to be used to decode the second current motion vector from the candidate list for the second current motion vector; and a decoding unit (403) configured to decode the second current motion vector using the selected predicted motion vector, wherein the second judgment step (S403) is performed only when the second current reference picture is considered not identical to the second adjacent reference picture.

[0006] 6. Picture decoding method according to claim 4, characterized in that, when the second current reference picture is considered not identical to the first adjacent reference picture, the first adjacent motion vector is not added to the candidate list for the second current motion vector, and wherein, when the second current reference picture is considered not identical to the second adjacent reference picture, the second adjacent motion vector is not added to the candidate list for the second current motion vector.
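The candidate-list construction recited in claims 1 and 4 can be sketched as follows. This is a minimal sketch: reference pictures are compared here by a simple identifier (for example, picture order count), and all names are illustrative.

```python
def build_candidate_list(second_current_ref: int, adjacent: dict) -> list:
    """Add the adjacent block's motion vectors to the candidate list for
    the second current motion vector.  The first adjacent motion vector is
    examined (S403) only when the second adjacent reference picture does
    not match (S302), mirroring the 'only when' condition of the claims."""
    candidates = []
    if adjacent["second_ref"] == second_current_ref:      # S302
        candidates.append(adjacent["second_mv"])          # S303
    elif adjacent["first_ref"] == second_current_ref:     # S403
        candidates.append(adjacent["first_mv"])           # S404
    return candidates
```

A predicted motion vector is then selected from the returned list and used to encode or decode the second current motion vector.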
类似技术:
公开号 | 公开日 | 专利标题 JP6167409B2|2017-07-26|Image decoding method and image decoding apparatus US10951911B2|2021-03-16|Image decoding method and image decoding apparatus using candidate motion vectors ES2621231T3|2017-07-03|Motion video coding method, motion video coding apparatus, motion video decoding method, motion video decoding apparatus and motion video coding / decoding apparatus RU2614542C2|2017-03-28|Video encoding method, video encoding device, video decoding method, video decoding device and apparatus for encoding/decoding video ES2834902T3|2021-06-21|Image decoding method, and image decoding device DK2717575T3|2019-01-28|PICTURE CODING PROCEDURE AND PICTURE CODES CA2825730C|2018-01-16|Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus CA2866121C|2018-04-24|Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus AU2012260302A1|2013-11-07|Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus CA2843560A1|2013-02-07|Video encoding method, video encoding apparatus, video decoding method, video decoding apparatus, and video encoding/decoding apparatus BR112012021600B1|2021-02-09|unlock filtering method to filter a plurality of blocks included in an image, method and coding apparatus for encoding an image JP6551894B2|2019-07-31|Moving picture decoding method and moving picture decoding apparatus EP2822277A1|2015-01-07|Image coding method, image decoding method, image coding device, image decoding device, and image coding-decoding device EP2871838A1|2015-05-13|Image encoding method, image decoding method, image encoding device and image decoding device WO2012090495A1|2012-07-05|Image encoding method and image decoding method 
JP6004375B2|2016-10-05|Image encoding method and image decoding method AU2011306322B2|2016-06-02|Image coding method, image decoding method, image coding apparatus, and image decoding apparatus
同族专利:
公开号 | 公开日 US9264726B2|2016-02-16| US9998736B2|2018-06-12| CA2805663C|2017-11-21| PL2661087T3|2017-10-31| ES2637615T3|2017-10-13| US9445105B2|2016-09-13| EP2661087A4|2014-10-29| US20160105674A1|2016-04-14| US10880545B2|2020-12-29| JP6167409B2|2017-07-26| MX2013000995A|2013-03-22| US10638128B2|2020-04-28| EP3200463B1|2018-10-31| AU2016202666B2|2017-08-31| CN103004205B|2017-02-08| US20150229930A1|2015-08-13| US20120163466A1|2012-06-28| US9729877B2|2017-08-08| US20160353102A1|2016-12-01| JP6008291B2|2016-10-19| KR101790378B1|2017-10-25| US20210076030A1|2021-03-11| EP2661087A1|2013-11-06| WO2012090491A1|2012-07-05| CN106878749B|2019-09-24| US9049455B2|2015-06-02| AU2011353405B2|2016-01-28| CN103004205A|2013-03-27| US20180262753A1|2018-09-13| US20190132588A1|2019-05-02| US20170272746A1|2017-09-21| US10574983B2|2020-02-25| PL3200463T3|2019-03-29| US20200154102A1|2020-05-14| KR20140029348A|2014-03-10| JP2017011746A|2017-01-12| CA2805663A1|2012-07-05| EP2661087B1|2017-05-24| EP3200463A1|2017-08-02| CN106878749A|2017-06-20| AU2016202666A1|2016-05-26| SG187185A1|2013-02-28| BR112013002448A2|2018-01-23| JPWO2012090491A1|2014-06-05| ES2704960T3|2019-03-20| AU2011353405A1|2013-02-07|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US6625211B1 | 1999-02-25 | 2003-09-23 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for transforming moving picture coding system
US7321626B2 | 2002-03-08 | 2008-01-22 | Sharp Laboratories Of America, Inc. | System and method for predictive motion estimation using a global motion predictor
EP1827029A1 | 2002-01-18 | 2007-08-29 | Kabushiki Kaisha Toshiba | Video decoding method and apparatus
JP2004208258A | 2002-04-19 | 2004-07-22 | Matsushita Electric Ind Co Ltd | Motion vector calculating method
HUE044616T2 | 2002-04-19 | 2019-11-28 | Panasonic Ip Corp America | Motion vector calculating method
EP2271106B1 | 2002-04-19 | 2016-05-25 | Panasonic Intellectual Property Corporation of America | Motion vector calculating method
JP2004023458A | 2002-06-17 | 2004-01-22 | Toshiba Corp | Moving picture encoding/decoding method and apparatus
KR100865034B1 | 2002-07-18 | 2008-10-23 | LG Electronics Inc. | Method for predicting motion vector
KR100990829B1 | 2002-11-01 | 2010-10-29 | Panasonic Corporation | Motion picture encoding method and motion picture decoding method
US7400681B2 | 2003-11-28 | 2008-07-15 | Scientific-Atlanta, Inc. | Low-complexity motion vector prediction for video codec with two lists of reference pictures
KR100631768B1 | 2004-04-14 | 2006-10-09 | Samsung Electronics Co., Ltd. | Interframe prediction method and video encoder, video decoding method and video decoder in video coding
JP4702943B2 | 2005-10-19 | 2011-06-15 | Canon Inc. | Image processing apparatus and method
EP3179720B1 | 2006-03-16 | 2019-07-24 | Huawei Technologies Co., Ltd. | Quantization method and apparatus in encoding/decoding
JP4822940B2 | 2006-06-02 | 2011-11-24 | Canon Inc. | Image processing apparatus and image processing method
JP4884290B2 | 2007-05-07 | 2012-02-29 | Panasonic Corporation | Moving picture decoding integrated circuit, moving picture decoding method, moving picture decoding apparatus, and moving picture decoding program
JP4650461B2 | 2007-07-13 | 2011-03-16 | Sony Corporation | Encoding device, encoding method, program, and recording medium
WO2010021700A1 | 2008-08-19 | 2010-02-25 | Thomson Licensing | A propagation map
JPWO2010035730A1 | 2008-09-24 | 2012-02-23 | Sony Corporation | Image processing apparatus and method
US20100166073A1 | 2008-12-31 | 2010-07-01 | Advanced Micro Devices, Inc. | Multiple-candidate motion estimation with advanced spatial filtering of differential motion vectors
JP5169978B2 | 2009-04-24 | 2013-03-27 | Sony Corporation | Image processing apparatus and method
US9060176B2 | 2009-10-01 | 2015-06-16 | Ntt Docomo, Inc. | Motion vector prediction in video coding
CN105791859B | 2010-05-26 | 2018-11-06 | LG Electronics Inc. | Method and apparatus for processing a video signal
US9300970B2 | 2010-07-09 | 2016-03-29 | Samsung Electronics Co., Ltd. | Methods and apparatuses for encoding and decoding motion vector
US9124898B2 | 2010-07-12 | 2015-09-01 | Mediatek Inc. | Method and apparatus of temporal motion vector prediction
US9398308B2 | 2010-07-28 | 2016-07-19 | Qualcomm Incorporated | Coding motion prediction direction in video coding
US8824558B2 | 2010-11-23 | 2014-09-02 | Mediatek Inc. | Method and apparatus of spatial motion vector prediction
US20130128983A1 | 2010-12-27 | 2013-05-23 | Toshiyasu Sugio | Image coding method and image decoding method
US9049455B2 | 2010-12-28 | 2015-06-02 | Panasonic Intellectual Property Corporation Of America | Image coding method of coding a current picture with prediction using one or both of a first reference picture list including a first current reference picture for a current block and a second reference picture list including a second current reference picture for the current block

Cited by:
Publication number | Filing date | Publication date | Applicant | Patent title
US20130128983A1 | 2010-12-27 | 2013-05-23 | Toshiyasu Sugio | Image coding method and image decoding method
US9049455B2 | 2010-12-28 | 2015-06-02 | Panasonic Intellectual Property Corporation Of America | Image coding method of coding a current picture with prediction using one or both of a first reference picture list including a first current reference picture for a current block and a second reference picture list including a second current reference picture for the current block
US9635382B2 | 2011-01-07 | 2017-04-25 | Texas Instruments Incorporated | Method, system and computer program product for determining a motion vector
US20130322535A1 | 2011-02-21 | 2013-12-05 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images using plurality of reference images and device using method
MX2013010231A | 2011-04-12 | 2013-10-25 | Panasonic Corp | Motion-video encoding method, motion-video encoding apparatus, motion-video decoding method, motion-video decoding apparatus, and motion-video encoding/decoding apparatus
TWI526056B | 2011-04-27 | 2016-03-11 | Jvc Kenwood Corp | A moving picture coding apparatus, a motion picture coding method, a transmission picture coding program, a transmission apparatus, a transmission method, a transmission program, a video decoding apparatus, a video decoding method, a video decoding program, a reception device, a reception method, a receiving program
US9485518B2 | 2011-05-27 | 2016-11-01 | Sun Patent Trust | Decoding method and apparatus with candidate motion vectors
MX2013012132A | 2011-05-27 | 2013-10-30 | Panasonic Corp | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device
KR101889582B1 | 2011-05-31 | 2018-08-20 | Sun Patent Trust | Video encoding method, video encoding device, video decoding method, video decoding device, and video encoding/decoding device
GB2491589B | 2011-06-06 | 2015-12-16 | Canon Kk | Method and device for encoding a sequence of images and method and device for decoding a sequence of image
KR102083012B1 | 2011-06-28 | 2020-02-28 | LG Electronics Inc. | Method for setting motion vector list and apparatus using same
PL2728878T3 | 2011-06-30 | 2020-06-15 | Sun Patent Trust | Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
JP2013098933A | 2011-11-04 | 2013-05-20 | Sony Corp | Image processing device and method
WO2013139250A1 | 2012-03-22 | 2013-09-26 | Mediatek Inc. | Method and apparatus of scalable video coding
US10200710B2 | 2012-07-02 | 2019-02-05 | Samsung Electronics Co., Ltd. | Motion vector prediction method and apparatus for encoding or decoding video
US9325990B2 | 2012-07-09 | 2016-04-26 | Qualcomm Incorporated | Temporal motion vector prediction in video coding extensions
US9699450B2 | 2012-10-04 | 2017-07-04 | Qualcomm Incorporated | Inter-view predicted motion vector for 3D video
CN102883163B | 2012-10-08 | 2014-05-28 | Huawei Technologies Co., Ltd. | Method and device for building motion vector lists for prediction of motion vectors
CN102946536B | 2012-10-09 | 2015-09-30 | Huawei Technologies Co., Ltd. | Method and device for building a candidate motion vector list
CN104904209B | 2013-01-07 | 2018-07-24 | LG Electronics Inc. | Video signal processing method
US9628795B2 | 2013-07-17 | 2017-04-18 | Qualcomm Incorporated | Block identification using disparity vector in video coding
CN104079944B | 2014-06-30 | 2017-12-01 | Huawei Technologies Co., Ltd. | Motion vector list construction method and system for video coding
US20190289315A1 | 2018-03-14 | 2019-09-19 | Mediatek Inc. | Methods and apparatuses of generating average candidates in video coding systems
CN110365987A | 2018-04-09 | 2019-10-22 | Hangzhou Hikvision Digital Technology Co., Ltd. | Motion vector determination method, apparatus, and device
GB2588528A | 2018-06-29 | 2021-04-28 | Beijing Bytedance Network Tech Co Ltd | Selection of coded motion information for LUT updating
WO2020003282A1 | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Managing motion vector predictors for video coding
JP2021530936A | 2018-06-29 | 2021-11-11 | Beijing Bytedance Network Technology Co., Ltd. | Look-up table updates: FIFO, restricted FIFO
EP3794824A1 | 2018-06-29 | 2021-03-24 | Beijing Bytedance Network Technology Co. Ltd. | Conditions for updating LUTs
CN110662064A | 2018-06-29 | 2020-01-07 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in LUT
TWI735902B | 2018-07-02 | 2021-08-11 | Beijing Bytedance Network Technology Co., Ltd. | Lookup table with intra frame prediction and intra frame prediction from non-adjacent blocks
CN110876282A | 2018-07-02 | 2020-03-10 | Huawei Technologies Co., Ltd. | Motion vector prediction method and related device
CN112840651A | 2018-09-12 | 2021-05-25 | Huawei Technologies Co., Ltd. | Sign value and absolute value indicating increments of image sequence numbers
TW202025760A | 2018-09-12 | 2020-07-01 | Beijing Bytedance Network Technology Co., Ltd. | How many HMVP candidates to be checked
WO2020133518A1 | 2018-12-29 | 2020-07-02 | SZ DJI Technology Co., Ltd. | Video processing method and device
CN113383554A | 2019-01-13 | 2021-09-10 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between LUTs and shared merge lists
WO2020263499A1 | 2019-06-24 | 2020-12-30 | Alibaba Group Holding Limited | Adaptive resolution change in video processing
Legal status:
2018-03-06 | B25A | Requested transfer of rights approved | Owner name: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
2018-03-27 | B25A | Requested transfer of rights approved | Owner name: SUN PATENT TRUST (US)
2018-03-27 | B15K | Others concerning applications: alteration of classification | IPC: H04N 7/00 (2011.01)
2018-12-18 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-03-17 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2020-03-17 | B15K | Others concerning applications: alteration of classification | Free format text: "The previous classification was: H04N 7/00" | IPC: H04N 19/105 (2014.01), H04N 19/107 (2014.01), H04N
2021-07-06 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-08-31 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: "Term of validity: 20 (twenty) years counted from 27/12/2011, subject to the legal conditions."
Priority:
Application number | Filing date | Patent title
US201061427587P | true | 2010-12-28 | 2010-12-28
US61/427,587 | 2010-12-28
PCT/JP2011/007309 | WO2012090491A1 | 2010-12-28 | 2011-12-27 | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device